Posted to commits@pulsar.apache.org by GitBox <gi...@apache.org> on 2022/07/27 20:51:43 UTC

[GitHub] [pulsar-client-go] michaeljmarshall commented on a diff in pull request #815: [issue 814] consumer and producer reconnect failure metrics counter

michaeljmarshall commented on code in PR #815:
URL: https://github.com/apache/pulsar-client-go/pull/815#discussion_r931568426


##########
pulsar/producer_partition.go:
##########
@@ -419,6 +419,9 @@ func (p *partitionProducer) reconnectToBroker() {
 		if maxRetry > 0 {
 			maxRetry--
 		}
+		if maxRetry == 0 || backoff.IsMaxBackoffReached() {
+			p.metrics.ProducersReconnectFailure.Inc()

Review Comment:
   Same comment here, but for the producer case.
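   For reference, a minimal sketch of how counters like `ProducersReconnectFailure` and `ConsumersReconnectFailure` could be declared with prometheus/client_golang. The metric names, help strings, and registration shown here are illustrative assumptions for this sketch, not the PR's actual definitions.

```go
// Hypothetical declarations of the two reconnect-failure counters referenced
// in the diffs above; the real names, labels, and registration in
// pulsar-client-go may differ.
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	ProducersReconnectFailure = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "pulsar_client_producers_reconnect_failure",
		Help: "Number of producer reconnect attempts that ended in failure",
	})

	ConsumersReconnectFailure = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "pulsar_client_consumers_reconnect_failure",
		Help: "Number of consumer reconnect attempts that ended in failure",
	})
)

func init() {
	// Register both counters so they are exported alongside the client's
	// other metrics.
	prometheus.MustRegister(ProducersReconnectFailure, ConsumersReconnectFailure)
}
```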



##########
pulsar/consumer_partition.go:
##########
@@ -1155,6 +1155,9 @@ func (pc *partitionConsumer) reconnectToBroker() {
 		if maxRetry > 0 {
 			maxRetry--
 		}
+		if maxRetry == 0 || backoff.IsMaxBackoffReached() {
+			pc.metrics.ConsumersReconnectFailure.Inc()

Review Comment:
   This will only increment when the max retries have been exhausted, which won't occur when unlimited retries are enabled. I think it'd be reasonable to increment this metric for each failed attempt to reconnect to the broker. 
   
   > Because reconnecting to the broker during producer/consumer creation uses a doubling back-off retry, to reduce excessive retry-failure noise these two counters are only incremented when either of two conditions is met.
   
   Is there a reason you view regular (non-final) failures as noise? Given that we're willing to log that the consumer failed to connect, I think it's reasonable to increment the metric.
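   
   To illustrate the per-attempt approach, here is a minimal, self-contained sketch of a reconnect loop that increments the failure counter on every failed attempt. The `backoff` and `counter` types are simplified stand-ins, and the `reconnectToBroker` signature is invented for the example; this is not the actual pulsar-client-go implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// backoff is a simplified stand-in for the client's doubling back-off policy.
type backoff struct {
	next, max time.Duration
}

func (b *backoff) Next() time.Duration {
	d := b.next
	if b.next < b.max {
		b.next *= 2
	}
	return d
}

// counter is a stand-in for the prometheus counter held by the metrics struct.
type counter struct{ n int }

func (c *counter) Inc() { c.n++ }

// reconnectToBroker sketches the suggestion above: increment the failure
// counter on every failed attempt, so the metric also moves when retries are
// unlimited (maxRetry < 0) and the "retries exhausted" branch is never hit.
func reconnectToBroker(maxRetry int, grabConn func() error, reconnectFailure *counter) error {
	b := &backoff{next: 100 * time.Millisecond, max: time.Second}
	for maxRetry != 0 {
		time.Sleep(b.Next())
		if err := grabConn(); err == nil {
			return nil // reconnected successfully
		}
		reconnectFailure.Inc() // count each failed attempt, not only the last one
		if maxRetry > 0 {
			maxRetry--
		}
	}
	return errors.New("failed to reconnect to broker: retries exhausted")
}

func main() {
	c := &counter{}
	_ = reconnectToBroker(3, func() error { return errors.New("connection refused") }, c)
	fmt.Println("reconnect failures:", c.n) // prints 3
}
```

   With unlimited retries, a counter incremented per attempt still advances on every failure, which is the visibility the comment above argues for.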



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@pulsar.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org