Posted to dev@hive.apache.org by "shezm (Jira)" <ji...@apache.org> on 2021/08/03 05:33:00 UTC

[jira] [Created] (HIVE-25416) Hive metastore memory leak caused by datanucleus-api-jdo bug

shezm created HIVE-25416:
----------------------------

             Summary: Hive metastore memory leak caused by datanucleus-api-jdo bug
                 Key: HIVE-25416
                 URL: https://issues.apache.org/jira/browse/HIVE-25416
             Project: Hive
          Issue Type: Bug
            Reporter: shezm
            Assignee: shezm
         Attachments: leak.jpg

I encountered a memory leak in the Hive metastore. The MAT (Eclipse Memory Analyzer) heap dump shows:

!leak.jpg!

The full error message is:
{code:java}
Cannot get Long result for param = 8 for column "`FUNCS`.`FUNC_ID`" : Operation not allowed after ResultSet closed{code}
This is caused by a bug in the JDOPersistenceManager.retrieveAll code:
{code:java}
// Decompiled from org.datanucleus.api.jdo.JDOPersistenceManager
public class JDOPersistenceManager {
    public void retrieveAll(Collection pcs, boolean useFetchPlan) {
        this.assertIsOpen();
        ArrayList failures = new ArrayList();
        Iterator i = pcs.iterator();

        while (i.hasNext()) {
            try {
                this.jdoRetrieve(i.next(), useFetchPlan);
            } catch (RuntimeException e) {
                // Every failed retrieve is accumulated here; if the iterator
                // never terminates, this list grows until the heap is exhausted.
                failures.add(e);
            }
        }

        if (!failures.isEmpty()) {
            throw new JDOUserException(Localiser.msg("010038"),
                    (Exception[]) failures.toArray(new Exception[failures.size()]));
        }
    }
}
{code}
In some extreme cases the iterator never terminates once the underlying ResultSet has been closed: next() keeps failing, every iteration adds another exception to the failures list, and the list grows into the huge ArrayList shown in the MAT screenshot above.
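
To make the failure mode concrete, here is a minimal, self-contained sketch. This is hypothetical illustration code, not DataNucleus code; it only reproduces the accumulation pattern of retrieveAll() when the backing iterator stops terminating.
{code:java}
// Hypothetical reproduction sketch -- NOT DataNucleus code.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RetrieveAllLeakSketch {
    public static void main(String[] args) {
        // Simulates a lazy query-result iterator that never terminates
        // once the underlying ResultSet has been closed.
        Iterator<Object> broken = new Iterator<Object>() {
            public boolean hasNext() { return true; }     // never returns false
            public Object next() { return new Object(); }
        };

        List<RuntimeException> failures = new ArrayList<>();
        while (broken.hasNext()) {
            try {
                // Stands in for jdoRetrieve(), which fails on every element
                // with "Operation not allowed after ResultSet closed".
                broken.next();
                throw new RuntimeException("Operation not allowed after ResultSet closed");
            } catch (RuntimeException e) {
                failures.add(e); // grows unbounded -> eventually OutOfMemoryError
            }
        }
    }
}
{code}
Running this fills the heap with an ever-growing ArrayList of exceptions, matching the MAT screenshot.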

Details of the bug are described here: [https://github.com/datanucleus/datanucleus-api-jdo/issues/106]

This problem is fixed in datanucleus-api-jdo 5.2.6, so we should upgrade the dependency.
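
For illustration, assuming the version is managed through a Maven property in Hive's pom.xml (the actual property name and location in Hive's build may differ), the change would look roughly like:
{code:xml}
<!-- Hypothetical pom.xml change; the real property name in Hive's build may differ -->
<datanucleus-api-jdo.version>5.2.6</datanucleus-api-jdo.version>
{code}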

--
This message was sent by Atlassian Jira
(v8.3.4#803005)