Posted to dev@felix.apache.org by "Jamie goodyear (JIRA)" <ji...@apache.org> on 2009/06/01 18:51:07 UTC

[jira] Updated: (FELIX-1192) KARAF: Locking error in DefaultJDBCLock, also contains an eventual OutOfMemory error on locked processes.

     [ https://issues.apache.org/jira/browse/FELIX-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jamie goodyear updated FELIX-1192:
----------------------------------

    Description: 
If you configure ServiceMix to use the DefaultJDBCLock like so:


karaf.lock=true
karaf.lock.class=org.apache.felix.karaf.main.DefaultJDBCLock
karaf.lock.level=50
karaf.lock.delay=1000
karaf.lock.jdbc.url=jdbc:mysql://localhost:3306/somedatabase
karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
karaf.lock.jdbc.user=root
karaf.lock.jdbc.password=
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=mycluster
karaf.lock.jdbc.timeout=30


and simply run it, you'll get an OutOfMemoryError printed to the console within an hour or so, presumably because each liveness check prepares a new statement that is never closed, so open statements pile up on the lock connection. The offending code appears to be here:


/**
     * isAlive - test if lock still exists.
     */
    public boolean isAlive() throws Exception {
        if (lockConnection == null) { return false; }
        PreparedStatement statement = null;
        try {
            lockConnection.setAutoCommit(false);
            statements.init(lockConnection);
            String sql = statements.testLockTableStatus();
            statement = lockConnection.prepareStatement(sql);
            statement.execute();
        } catch (Exception ex) {
            return false;
        } 
        return true;
    }


The try/catch block needs a finally clause to ensure the PreparedStatement is cleaned up:


} finally {
            if (statement != null) {
                try {
                    statement.close();
                } catch (SQLException e) {
                         //log failure here...
                }
            }
        }
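
For reference, with the cleanup folded in, the whole method would look roughly like the following. This is just the two snippets above combined (same fields and control flow); only the finally block and its comments are new:

{noformat}
    /**
     * isAlive - test if lock still exists.
     */
    public boolean isAlive() throws Exception {
        if (lockConnection == null) { return false; }
        PreparedStatement statement = null;
        try {
            lockConnection.setAutoCommit(false);
            statements.init(lockConnection);
            String sql = statements.testLockTableStatus();
            statement = lockConnection.prepareStatement(sql);
            statement.execute();
        } catch (Exception ex) {
            return false;
        } finally {
            // Always release the statement so repeated liveness checks
            // do not accumulate open PreparedStatements on the connection.
            if (statement != null) {
                try {
                    statement.close();
                } catch (SQLException e) {
                    // log failure here...
                }
            }
        }
        return true;
    }
{noformat}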


We should also put some unit tests in place for this class and ensure that it doesn't leak any other resources, JDBC or otherwise, since this class is used to set up master/slave HA, which is an important use case to support.
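
As a rough starting point for such a test, isAlive() could be exercised against a stubbed java.sql.Connection that counts how many statements it hands out and how many get closed. The sketch below is only illustrative: it assumes DefaultJDBCLock can be constructed from the karaf.lock.* properties and that the lockConnection field can be swapped in via reflection for the test (the real class may want a proper test hook instead), and the statements helper may need stubbing too:

{noformat}
import java.lang.reflect.Field;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DefaultJDBCLockTest {

    private final AtomicInteger opened = new AtomicInteger();
    private final AtomicInteger closed = new AtomicInteger();

    @Test
    public void isAliveClosesEveryStatementItPrepares() throws Exception {
        // Assumption: the lock is configured from the karaf.lock.* properties.
        Properties props = new Properties();
        props.setProperty("karaf.lock.jdbc.table", "KARAF_LOCK");
        props.setProperty("karaf.lock.jdbc.clustername", "mycluster");
        props.setProperty("karaf.lock.jdbc.timeout", "30");
        DefaultJDBCLock lock = new DefaultJDBCLock(props);

        // Assumption: swap in the counting connection via the lockConnection field.
        Field field = DefaultJDBCLock.class.getDeclaredField("lockConnection");
        field.setAccessible(true);
        field.set(lock, countingConnection());

        lock.isAlive();

        assertEquals("every statement prepared by isAlive() must be closed",
                     opened.get(), closed.get());
    }

    // Connection stub: counts prepareStatement() calls, hands out counting statements.
    private Connection countingConnection() {
        return (Connection) Proxy.newProxyInstance(
                getClass().getClassLoader(), new Class[] { Connection.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) {
                        if ("prepareStatement".equals(method.getName())) {
                            opened.incrementAndGet();
                            return countingStatement();
                        }
                        return defaultValue(method.getReturnType());
                    }
                });
    }

    // PreparedStatement stub: records close() calls, ignores everything else.
    private PreparedStatement countingStatement() {
        return (PreparedStatement) Proxy.newProxyInstance(
                getClass().getClassLoader(), new Class[] { PreparedStatement.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) {
                        if ("close".equals(method.getName())) {
                            closed.incrementAndGet();
                        }
                        return defaultValue(method.getReturnType());
                    }
                });
    }

    // Harmless defaults so the proxies can satisfy primitive return types.
    private static Object defaultValue(Class type) {
        if (type == boolean.class) { return Boolean.FALSE; }
        if (type == int.class)     { return Integer.valueOf(0); }
        if (type == long.class)    { return Long.valueOf(0L); }
        return null;
    }
}
{noformat}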

Also of note: the container-level locking mechanism does not appear to be honored by slave processes. In the Main#lock method, if the lock is enabled we should only start up to the configured lock level; otherwise, start Karaf up to the default lock level.
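
To make the intent concrete, the gating could look roughly like the sketch below, written against the standard OSGi StartLevel service. It only illustrates the behaviour described above and is not the actual Main code; the class name, the defaultStartLevel parameter and the way the properties reach the method are placeholders:

{noformat}
import java.util.Properties;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.startlevel.StartLevel;

// Illustration only: how the startup path could honor karaf.lock.level.
public class StartLevelGate {

    public void applyStartLevel(BundleContext context, Properties props, int defaultStartLevel) {
        ServiceReference ref = context.getServiceReference(StartLevel.class.getName());
        StartLevel startLevel = (StartLevel) context.getService(ref);

        boolean lockingEnabled = Boolean.parseBoolean(props.getProperty("karaf.lock", "false"));
        int lockLevel = Integer.parseInt(
                props.getProperty("karaf.lock.level", String.valueOf(defaultStartLevel)));

        if (lockingEnabled) {
            // Hold the framework at the configured lock level; the instance that
            // eventually wins the lock can raise the start level afterwards.
            startLevel.setStartLevel(lockLevel);
        } else {
            // Locking disabled: bring the container straight up to the default level.
            startLevel.setStartLevel(defaultStartLevel);
        }
    }
}
{noformat}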


> KARAF: Locking error in DefaultJDBCLock, also contains an eventual OutOfMemory error on locked processes.
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: FELIX-1192
>                 URL: https://issues.apache.org/jira/browse/FELIX-1192
>             Project: Felix
>          Issue Type: Bug
>          Components: Karaf
>    Affects Versions: karaf-1.0.0
>         Environment: All
>            Reporter: Jamie goodyear
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.