Posted to dev@kafka.apache.org by Paolo Patierno <pp...@live.com> on 2017/10/23 09:06:25 UTC

Metadata class doesn't "expose" topics with errors

Hi devs,

while developing KIP-204 (adding the delete records operation to the "new" Admin Client) I'm facing the following doubt (or maybe just a lack of information) ...


As described in KIP-107 (which implements this feature at the protocol level and in the "legacy" Admin Client), the request needs to be sent to the partition leader.


For both KIPs, the operation takes a Map<TopicPartition, offset> (the offset is a long in the "legacy" API but is becoming a class in the "new" API). In order to reduce the number of requests to different leaders, my code groups partitions that have the same leader, ending up with a Map<Node, Map<TopicPartition, offset>>.
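
Just to make that grouping step concrete, here is a minimal sketch of what I mean, assuming the leader information is already available in a Cluster snapshot; apart from Cluster, Node, TopicPartition and leaderFor(), every name in it is illustrative only:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartition;

public class LeaderGroupingSketch {

    /** Groups the requested (partition -> offset) entries by the current leader of each partition. */
    public static Map<Node, Map<TopicPartition, Long>> groupByLeader(Cluster cluster,
                                                                     Map<TopicPartition, Long> offsets) {
        Map<Node, Map<TopicPartition, Long>> byLeader = new HashMap<>();
        for (Map.Entry<TopicPartition, Long> entry : offsets.entrySet()) {
            Node leader = cluster.leaderFor(entry.getKey());
            if (leader == null) {
                // leader unknown (e.g. LEADER_NOT_AVAILABLE): skip here, report to the caller
                continue;
            }
            byLeader.computeIfAbsent(leader, node -> new HashMap<>())
                    .put(entry.getKey(), entry.getValue());
        }
        return byLeader;
    }
}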


In order to know the leaders I need to request metadata, and there are two ways of doing that:


  *   using something like the producer does with the Metadata class: setting the topics, requesting an update and waiting for it
  *   using the low-level MetadataRequest and handling the related response (which is what the "legacy" API does today)

I noticed that, when the Cluster object is built from the MetadataResponse, the topics with errors are skipped, which means that in the final "high level" Metadata class (from which the Cluster object is fetched) there is no information about them. So with the first option we have no info about topics with errors (maybe the only error I'm able to handle is LEADER_NOT_AVAILABLE, when leaderFor() on the Cluster returns a null Node).
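
With the second option, instead, the per-topic errors are right there in the response. As a rough sketch of what the check could look like, assuming the internal MetadataResponse keeps exposing errors() and cluster() as it does today (the class and method names I add here are illustrative only):

import java.util.Map;

import org.apache.kafka.common.protocol.Errors;
import org.apache.kafka.common.requests.MetadataResponse;

public class MetadataErrorsSketch {

    /**
     * Topics that came back with a topic-level error (e.g. LEADER_NOT_AVAILABLE,
     * UNKNOWN_TOPIC_OR_PARTITION). Note that response.cluster() only contains the
     * topics without errors, so this information cannot be recovered from the
     * Cluster object alone.
     */
    public static Map<String, Errors> failedTopics(MetadataResponse response) {
        return response.errors();
    }
}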

Is there any specific reason why "topics with errors" are not exposed in the Metadata instance?
Is the preferred pattern to use the low-level protocol classes in such a case?

Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno<http://twitter.com/ppatierno>
Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
Blog : DevExperience<http://paolopatierno.wordpress.com/>

Re: Metadata class doesn't "expose" topics with errors

Posted by Paolo Patierno <pp...@live.com>.
I'd just like to add that, in my specific case, I could solve this by creating a new NodeProvider implementation, something like a LeaderForProvider, in order to send the request to the right leader for each topic.

The drawback of this solution is that it doesn't group topics by leader, so instead of a single request per leader there would be several requests (even when they target the same leader).
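
To sketch the idea: the NodeProvider interface below is only a stand-in modelled on the internal KafkaAdminClient abstraction, so everything except Cluster, Node, TopicPartition and leaderFor() is illustrative rather than the real API.

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartition;

/** Stand-in for the admin client's internal node provider abstraction. */
interface NodeProvider {
    Node provide();
}

/** Resolves the current leader of a single partition from a Cluster snapshot. */
class LeaderForProvider implements NodeProvider {

    private final Cluster cluster;
    private final TopicPartition partition;

    LeaderForProvider(Cluster cluster, TopicPartition partition) {
        this.cluster = cluster;
        this.partition = partition;
    }

    @Override
    public Node provide() {
        // null when the leader is unknown (e.g. LEADER_NOT_AVAILABLE)
        return cluster.leaderFor(partition);
    }
}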


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno<http://twitter.com/ppatierno>
Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
Blog : DevExperience<http://paolopatierno.wordpress.com/>



Re: Metadata class doesn't "expose" topics with errors

Posted by Paolo Patierno <pp...@live.com>.
Hi Guozhang,

thanks! Really appreciated!
Yes, I think that at this point, having an implementation proposal, it makes more sense to comment on the PR directly.

Thanks
Paolo

Re: Metadata class doesn't "expose" topics with errors

Posted by Guozhang Wang <wa...@gmail.com>.
Hello Paolo,

I'm looking at your PR for KIP-204 now. Will reply on the discussion thread
/ PR diff file directly if I find anything.


Guozhang




-- 
-- Guozhang

Re: Metadata class doesn't "expose" topics with errors

Posted by Paolo Patierno <pp...@live.com>.
Hi Guozhang,

thanks for replying!


I see your point about the Metadata class not needing to expose errors because they are transient.


Regarding KIP-204, the delete records operation in the "legacy" client doesn't have any retry logic; it just returns the error to the user, who should retry (on the topics where the operation failed).

If I were to add retry logic to the "new" admin client, then, considering a delete records operation on several topic partitions at the same time, I would have to retry if at least one of the topic partitions comes back with LEADER_NOT_AVAILABLE (after the metadata request), without proceeding with the other topic partitions which do have leaders.

Maybe it's better to continue with the operation on those topics and come back to the user with LEADER_NOT_AVAILABLE for the others (which is the current behaviour of the "legacy" admin client).
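
Concretely, something like the sketch below: fail only the partitions without a known leader and keep going with the others. Here I'm using CompletableFuture just as a stand-in for the KafkaFuture the admin client actually returns, and all the names are illustrative:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.LeaderNotAvailableException;

public class PartialFailureSketch {

    /**
     * One future per partition: partitions without a known leader are failed
     * immediately with LeaderNotAvailableException, the others stay pending
     * and would be completed by the per-leader delete records responses.
     */
    public static Map<TopicPartition, CompletableFuture<Long>> start(Cluster cluster,
                                                                     Map<TopicPartition, Long> offsets) {
        Map<TopicPartition, CompletableFuture<Long>> futures = new HashMap<>();
        for (Map.Entry<TopicPartition, Long> entry : offsets.entrySet()) {
            CompletableFuture<Long> future = new CompletableFuture<>();
            Node leader = cluster.leaderFor(entry.getKey());
            if (leader == null) {
                future.completeExceptionally(
                        new LeaderNotAvailableException("No leader for " + entry.getKey()));
            }
            // otherwise the future is completed later, when the leader answers
            futures.put(entry.getKey(), future);
        }
        return futures;
    }
}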


For now, the implementation I have (I'll push a PR soon) uses the Call class for sending a MetadataRequest and then, in its handleResponse, another Call instance for sending the DeleteRecordsRequest.
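
In other words, something along these lines. This is a very rough, self-contained sketch of the nesting pattern only: the Call type is a stand-in for the admin client's internal abstraction, and the names and the fake network layer are illustrative.

import java.util.ArrayDeque;
import java.util.Deque;

public class NestedCallsSketch {

    /** Stand-in for the admin client's internal Call: build a request, then handle its response. */
    abstract static class Call {
        abstract String createRequest();                // what to send
        abstract void handleResponse(String response);  // what to do with the answer
    }

    private final Deque<Call> pending = new ArrayDeque<>();

    /** Drains the queue; a call's handleResponse may enqueue further calls. */
    private void run() {
        while (!pending.isEmpty()) {
            Call call = pending.poll();
            String response = fakeSend(call.createRequest());  // real network layer omitted
            call.handleResponse(response);
        }
    }

    void deleteRecords() {
        pending.add(new Call() {
            @Override
            String createRequest() {
                return "MetadataRequest";  // ask for leaders (and per-topic errors)
            }

            @Override
            void handleResponse(String metadataResponse) {
                // here: read errors, group partitions by leader, then enqueue one call per leader
                pending.add(new Call() {
                    @Override
                    String createRequest() {
                        return "DeleteRecordsRequest";  // sent to the leader found above
                    }

                    @Override
                    void handleResponse(String deleteResponse) {
                        // complete the per-partition futures returned to the user
                    }
                });
            }
        });
        run();
    }

    private String fakeSend(String request) {
        return request + "Response";  // placeholder for the real send/receive
    }

    public static void main(String[] args) {
        new NestedCallsSketch().deleteRecords();
    }
}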


Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno<http://twitter.com/ppatierno>
Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
Blog : DevExperience<http://paolopatierno.wordpress.com/>



Re: Metadata class doesn't "expose" topics with errors

Posted by Guozhang Wang <wa...@gmail.com>.
Hello Paolo,

The reason we filter out the topics with errors in the generated Cluster is
that Metadata, and the Cluster returned by its fetch(), is a common class
used by all clients (producer, consumer, connect, streams, admin) and is
treated as a high-level representation of the current snapshot of the
cluster's hosted topic information; hence we intentionally exclude any
transient errors from the representation to abstract such issues away
from its users.

As for your implementation of KIP-204, I think just waiting and retrying
until the updated metadata.fetch() Cluster contains the leader information
for the topic is fine: if a LEADER_NOT_AVAILABLE is returned you'll need to
back off and retry anyway, right?
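
Roughly something like the following, as a minimal sketch: the Supplier is just a placeholder for metadata.fetch() (or however you get at the refreshed Cluster), and the timeout / backoff handling is purely illustrative.

import java.util.function.Supplier;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartition;

public class LeaderWaitSketch {

    /** Polls the (refreshing) Cluster snapshot until the partition has a leader, or the timeout expires. */
    public static Node awaitLeader(Supplier<Cluster> cluster,
                                   TopicPartition partition,
                                   long timeoutMs,
                                   long backoffMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            Node leader = cluster.get().leaderFor(partition);
            if (leader != null)
                return leader;
            // LEADER_NOT_AVAILABLE case: back off, let the metadata refresh, try again
            Thread.sleep(backoffMs);
        }
        return null; // the caller maps this to a timeout / LEADER_NOT_AVAILABLE error
    }
}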


Guozhang





-- 
-- Guozhang

Re: Metadata class doesn't "expose" topics with errors

Posted by Paolo Patierno <pp...@live.com>.
Finally, another plan could be to use nested runnable calls.

The first one asks for the metadata (using the MetadataRequest, which gives us all the errors) and then sends the delete records requests in the handleResponse() of that metadata request.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno<http://twitter.com/ppatierno>
Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
Blog : DevExperience<http://paolopatierno.wordpress.com/>

