Posted to dev@knox.apache.org by "Mohammad Kamrul Islam (JIRA)" <ji...@apache.org> on 2017/08/15 07:21:00 UTC

[jira] [Updated] (KNOX-989) Revisit JMX Metrics to fix the Out of Memory issue

     [ https://issues.apache.org/jira/browse/KNOX-989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mohammad Kamrul Islam updated KNOX-989:
---------------------------------------
    Attachment: KNOX-989.1.patch

Please review.

> Revisit JMX Metrics to fix the Out of Memory issue
> --------------------------------------------------
>
>                 Key: KNOX-989
>                 URL: https://issues.apache.org/jira/browse/KNOX-989
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>            Reporter: Sandeep More
>            Assignee: Mohammad Kamrul Islam
>             Fix For: 0.14.0
>
>         Attachments: KNOX-989.1.patch
>
>
> Bug [KNOX-986|https://issues.apache.org/jira/browse/KNOX-986] uncovered a problem with Metrics when a large number of unique URLs are accessed via Knox: Knox creates a metrics object per unique URL, and these objects are never flushed (for an obvious reason: to maintain metric state).
> We need a proper fix that mitigates this memory growth while still allowing use of the JMX Metrics.
> One way of doing this would be to keep Metrics objects at the service level (e.g. /gateway/sandbox/webhdfs/*); another would be a reaper process that clears out unused objects. Other suggestions are welcome!
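The service-level option above can be sketched roughly as follows. This is a minimal, self-contained illustration only: it uses a plain JDK map as a stand-in for Knox's actual metrics registry, and the class name, the serviceLevelKey helper, and the "keep three path segments" rule are assumptions for the example, not Knox code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of "metrics at the service level": collapse each unique request
// URL to a bounded service-level key before looking up its counter, so the
// metrics map grows with the number of services, not the number of URLs.
public class ServiceLevelMetrics {

    // Stand-in for a real metrics registry (e.g. a Dropwizard MetricRegistry).
    final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Keep only the first three path segments (e.g. gateway/sandbox/webhdfs)
    // and replace the per-request remainder with a wildcard. The segment
    // count here is an illustrative assumption.
    static String serviceLevelKey(String requestPath) {
        String[] parts = requestPath.replaceAll("^/+", "").split("/");
        StringBuilder key = new StringBuilder();
        for (int i = 0; i < Math.min(3, parts.length); i++) {
            key.append(parts[i]).append('/');
        }
        return key.append('*').toString();
    }

    void record(String requestPath) {
        counters.computeIfAbsent(serviceLevelKey(requestPath), k -> new LongAdder())
                .increment();
    }

    public static void main(String[] args) {
        ServiceLevelMetrics metrics = new ServiceLevelMetrics();
        // 10,000 unique URLs collapse into a single metric entry.
        for (int i = 0; i < 10_000; i++) {
            metrics.record("/gateway/sandbox/webhdfs/v1/file" + i);
        }
        System.out.println(metrics.counters.size());   // 1
        System.out.println(metrics.counters.keySet()); // [gateway/sandbox/webhdfs/*]
    }
}
```

The trade-off, of course, is losing per-URL granularity; the reaper alternative keeps granularity but needs an eviction policy (idle time, max entries) to bound memory.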



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)