Posted to dev@ambari.apache.org by "Aravindan Vijayan (JIRA)" <ji...@apache.org> on 2015/10/22 20:09:27 UTC
[jira] [Commented] (AMBARI-13411) Problem in precision handling of metrics returned by AMS
[ https://issues.apache.org/jira/browse/AMBARI-13411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969563#comment-14969563 ]
Aravindan Vijayan commented on AMBARI-13411:
--------------------------------------------
Review board: https://reviews.apache.org/r/39543/
> Problem in precision handling of metrics returned by AMS
> --------------------------------------------------------
>
> Key: AMBARI-13411
> URL: https://issues.apache.org/jira/browse/AMBARI-13411
> Project: Ambari
> Issue Type: Bug
> Components: ambari-metrics
> Affects Versions: 2.1.2
> Reporter: Siddharth Wagle
> Assignee: Aravindan Vijayan
> Priority: Critical
> Fix For: 2.1.3
>
>
> - Exception in the ambari-server log:
> {code}
> WARN [qtp-client-60803] ObjectGraphWalker:209 - The configured limit of
> 1,000 object references was reached while attempting to calculate the
> size of the object graph. Severe performance degradation could occur if
> the sizing operation continues. This can be avoided by setting the
> CacheManger or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to
> "abort" or adding stop points with @IgnoreSizeOf annotations. If
> performance degradation is NOT an issue at the configured limit, raise
> the limit value using the CacheManager or Cache <sizeOfPolicy> elements
> {code}
> - Suggestion:
> -- Set size of policy to a max depth of 10,000
> -- Set max depth exceeded behavior to continue with calculations
> - It would be good to get metrics on how much perf degradation this causes
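> The two suggested settings correspond to Ehcache's {code}<sizeOfPolicy>{code} element. A minimal sketch of how they might look in the cache configuration (the enclosing element placement is an assumption; the attribute names are standard Ehcache 2.x):
> {code}
> <ehcache>
>   <!-- Raise the sizing limit to 10,000 references and keep calculating
>        (log a warning) instead of aborting when the limit is exceeded. -->
>   <sizeOfPolicy maxDepth="10000" maxDepthExceededBehavior="continue"/>
> </ehcache>
> {code}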
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)