Posted to issues@hive.apache.org by "shezm (Jira)" <ji...@apache.org> on 2022/04/27 06:41:00 UTC

[jira] [Resolved] (HIVE-25416) Hive metastore memory leak because datanucleus-api-jdo bug

     [ https://issues.apache.org/jira/browse/HIVE-25416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shezm resolved HIVE-25416.
--------------------------
    Fix Version/s: 4.0.0
                       (was: 3.1.3)
         Assignee: shezm  (was: shezm)
       Resolution: Fixed

> Hive metastore memory leak because datanucleus-api-jdo bug
> ----------------------------------------------------------
>
>                 Key: HIVE-25416
>                 URL: https://issues.apache.org/jira/browse/HIVE-25416
>             Project: Hive
>          Issue Type: Bug
>          Components: Standalone Metastore
>    Affects Versions: 3.1.2
>            Reporter: shezm
>            Assignee: shezm
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: leak.jpg
>
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> I encountered a memory leak case. The MAT (Eclipse Memory Analyzer) view:
> !leak.jpg!
> The full error message is:
> {code:java}
> Cannot get Long result for param = 8 for column "`FUNCS`.`FUNC_ID`" : Operation not allowed after ResultSet closed{code}
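> For context, the inner part of that message is what a JDBC driver typically raises when a ResultSet is read after it has been closed. A minimal sketch of how such an error surfaces (hypothetical JDBC URL, credentials and table; a MySQL-style driver is assumed, and the metastore of course goes through DataNucleus rather than raw JDBC like this):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> public class ClosedResultSetDemo {
>     public static void main(String[] args) throws SQLException {
>         // Hypothetical connection details, for illustration only.
>         try (Connection conn = DriverManager.getConnection(
>                      "jdbc:mysql://localhost:3306/metastore", "hive", "hive");
>              Statement stmt = conn.createStatement()) {
>             ResultSet rs = stmt.executeQuery("SELECT FUNC_ID FROM FUNCS");
>             rs.next();
>             rs.close();
>             // Reading a column after close() fails; a MySQL-style driver reports
>             // "Operation not allowed after ResultSet closed", and DataNucleus wraps
>             // such failures in a RuntimeException while materializing query results.
>             rs.getLong("FUNC_ID");
>         }
>     }
> }
> {code}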
> This is caused by a bug in the JDOPersistenceManager.retrieveAll code:
> {code:java}
> // Decompiled excerpt of org.datanucleus.api.jdo.JDOPersistenceManager#retrieveAll
> public void retrieveAll(Collection pcs, boolean useFetchPlan) {
>     this.assertIsOpen();
>     ArrayList failures = new ArrayList();
>     Iterator i = pcs.iterator();
>     while (i.hasNext()) {
>         try {
>             // i.next() itself can throw a RuntimeException (e.g. when the backing
>             // ResultSet is already closed). The exception is caught below, but the
>             // iterator is never advanced, so hasNext() keeps returning true.
>             this.jdoRetrieve(i.next(), useFetchPlan);
>         } catch (RuntimeException e) {
>             failures.add(e);
>         }
>     }
>     if (!failures.isEmpty()) {
>         throw new JDOUserException(Localiser.msg("010038"), (Exception[]) failures.toArray(new Exception[failures.size()]));
>     }
> }
> {code}
> In some extreme cases the call to next() itself keeps throwing without ever advancing the iterator, so the loop never terminates and the failures ArrayList grows very large, as shown in the MAT screenshot above.
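> A minimal, self-contained sketch of that failure mode (the stub iterator stands in for a query result whose backing ResultSet has been closed; the size cap exists only so the demo terminates):
> {code:java}
> import java.util.ArrayList;
> import java.util.Iterator;
> import java.util.List;
>
> public class RetrieveAllLoopDemo {
>     // Stand-in for an iterator whose next() keeps failing: hasNext() stays true,
>     // next() always throws, and the iterator never advances.
>     static final Iterator<Object> BROKEN = new Iterator<Object>() {
>         public boolean hasNext() { return true; }
>         public Object next() { throw new RuntimeException("Operation not allowed after ResultSet closed"); }
>     };
>
>     public static void main(String[] args) {
>         List<RuntimeException> failures = new ArrayList<>();
>         // Same shape as the retrieveAll loop above: every pass catches a new
>         // exception and adds it to failures, and the loop never makes progress.
>         while (BROKEN.hasNext() && failures.size() < 100_000) { // cap only so the demo stops
>             try {
>                 BROKEN.next(); // the real code calls this.jdoRetrieve(i.next(), useFetchPlan)
>             } catch (RuntimeException e) {
>                 failures.add(e);
>             }
>         }
>         System.out.println("accumulated " + failures.size() + " exceptions");
>     }
> }
> {code}
> Each captured exception is retained in the list, so in the real metastore this unbounded failures ArrayList is what dominates the heap, consistent with the MAT screenshot above.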
>  
> The details of the bug can be found here: [https://github.com/datanucleus/datanucleus-api-jdo/issues/106]
> This problem is fixed in datanucleus-api-jdo 5.2.6, so we should upgrade to that version.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)