Posted to issues@ozone.apache.org by "Wei-Chiu Chuang (Jira)" <ji...@apache.org> on 2022/05/10 20:10:00 UTC

[jira] [Updated] (HDDS-6722) Memory leak after updating to RocksDB 7

     [ https://issues.apache.org/jira/browse/HDDS-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDDS-6722:
----------------------------------
    Description: 
After HDDS-6456 updated the RocksDB version from 6.25.3 to 7.0.4, we started to experience OOMs at the Ozone Manager very quickly.

In our test environment, OM memory used to stay at around 3 GB. After the upgrade, OM memory quickly increased to over 100 GB and the OM crashed within 2 hours.

After some investigation and discussion with the RocksDB developers (https://github.com/facebook/rocksdb/issues/9962), we were able to isolate the issue to this particular breaking change: https://github.com/facebook/rocksdb/commit/99d86252b

Prior to the change, RocksDB 6 automatically reclaimed native memory when the corresponding Java object was garbage collected. After the change, RocksDB 7 requires applications to explicitly close native objects once they are no longer used.

The biggest offender is RocksIterator, but there are many others. I'll file jiras to clean up their usage. Unfortunately, SonarCloud doesn't seem to detect these unclosed objects.
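
As an illustration of the fix we need across the codebase, native handles have to be closed explicitly, e.g. with try-with-resources. A minimal sketch against the stock RocksJava API (the DB path and the loop body are placeholders, not actual OM code):

{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

public class CloseIteratorSketch {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    // Options, RocksDB and RocksIterator all extend RocksObject (AutoCloseable).
    // With RocksDB 7 their native memory is only freed when close() runs;
    // relying on GC, as RocksDB 6 allowed, leaks the C++ side.
    try (Options options = new Options().setCreateIfMissing(true);
         RocksDB db = RocksDB.open(options, "/tmp/rocksdb-sketch"); // placeholder path
         RocksIterator it = db.newIterator()) {
      for (it.seekToFirst(); it.isValid(); it.next()) {
        // process it.key() / it.value()
      }
    } // leaving the block calls close() and releases the native objects
  }
}
{code}

Every place in OM that creates a RocksIterator (or any other RocksObject) without a matching close() needs this treatment.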


> Memory leak after updating to RocksDB 7
> ---------------------------------------
>
>                 Key: HDDS-6722
>                 URL: https://issues.apache.org/jira/browse/HDDS-6722
>             Project: Apache Ozone
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Wei-Chiu Chuang
>            Priority: Blocker
>         Attachments: om rocksdb crash.png
>



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
