Posted to reviews@spark.apache.org by "venkateshbalaji99 (via GitHub)" <gi...@apache.org> on 2023/08/23 16:37:16 UTC

[GitHub] [spark] venkateshbalaji99 commented on pull request #41199: [SPARK-43536][CORE] Fixing statsd sink reporter

venkateshbalaji99 commented on PR #41199:
URL: https://github.com/apache/spark/pull/41199#issuecomment-1690283871

   Hi @paymog, as you have pointed out, this seems to be an issue in Flink too: the metric value is not decremented after each push, even though StatsD expects counters to be reported as deltas. The motivation for not adding decrement logic here is that the current approach is more resilient to intermittent failures (such as lost packets), since the cumulative total is still retained on the next push; whereas if we computed and sent deltas ourselves, those kinds of errors could add up over time. Since gauge metrics fully match our use case, I think remapping Spark's counter metrics to be interpreted as gauges by StatsD would be the simplest solution.
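   To make the counter-vs-gauge distinction concrete, here is a minimal sketch (not Spark's actual StatsdSink code; the metric name is hypothetical) of the StatsD line protocol for the two types. StatsD treats `name:value|c` as a delta to add to its running total, but treats `name:value|g` as an absolute value that replaces the stored one, which is why re-sending a cumulative total as a gauge reports it correctly:

   ```python
   # Sketch of the StatsD plaintext line protocol, assuming the standard
   # "|c" (counter) and "|g" (gauge) type suffixes.

   def format_counter(name: str, delta: int) -> str:
       # StatsD ADDS this value to its running total on every push,
       # so sending a cumulative total here would double-count.
       return f"{name}:{delta}|c"

   def format_gauge(name: str, value: int) -> str:
       # StatsD REPLACES the stored value, so a cumulative total sent
       # on each push stays correct even if some packets are lost.
       return f"{name}:{value}|g"

   # Spark's Counter holds a monotonically increasing total, so under
   # the remapping proposed above it would be emitted as a gauge:
   total_records = 1500  # hypothetical cumulative counter value
   print(format_gauge("spark.app.records.count", total_records))
   ```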


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org
