Posted to common-dev@hadoop.apache.org by "Todd Lipcon (JIRA)" <ji...@apache.org> on 2009/04/08 07:03:12 UTC

[jira] Created: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Allow ServicePlugins to hook callbacks into key service events
--------------------------------------------------------------

                 Key: HADOOP-5640
                 URL: https://issues.apache.org/jira/browse/HADOOP-5640
             Project: Hadoop Core
          Issue Type: Improvement
          Components: util
            Reporter: Todd Lipcon


HADOOP-5257 added the ability for NameNode and DataNode to start and stop ServicePlugin implementations at NN/DN start/stop. However, this is insufficient integration for some common use cases.

We should add some functionality for Plugins to subscribe to events generated by the service they're plugging into. Some potential hook points are:

NameNode:
  - new datanode registered
  - datanode has died
  - exception caught
  - etc?

DataNode:
  - startup
  - initial registration with NN complete (this is important for HADOOP-4707 to sync up datanode.dnRegistration.name with the NN-side registration)
  - namenode reconnect
  - some block transfer hooks?
  - exception caught

I see two potential routes for implementation:

1) We make an enum for the types of hook points and have a general function in the ServicePlugin interface. Something like:

{code:java}
enum HookPoint {
  DN_STARTUP,
  DN_RECEIVED_NEW_BLOCK,
  DN_CAUGHT_EXCEPTION,
  ...
}

void runHook(HookPoint hp, Object value);
{code}
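
For illustration, a plugin written against option (1) ends up dispatching on the enum itself; the sketch below assumes runHook is added to ServicePlugin and that a new-block event carries a Block payload (both assumptions):

{code:java}
// Sketch only: every plugin repeats a switch like this, and the
// Object payload must be downcast at runtime.
class MyPlugin implements ServicePlugin {
  public void runHook(HookPoint hp, Object value) {
    switch (hp) {
      case DN_STARTUP:
        break; // no payload
      case DN_RECEIVED_NEW_BLOCK:
        Block b = (Block) value; // unchecked; a wrong type fails at runtime
        break;
      case DN_CAUGHT_EXCEPTION:
        Exception e = (Exception) value;
        break;
      default:
        break;
    }
  }
}
{code}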

2) We make classes specific to each "pluggable" as was originally suggested in HADOOP-5257. Something like:

{code:java}
class DataNodePlugin {
  void datanodeStarted() {}
  void receivedNewBlock(Block block) {} // exact parameters TBD: block info, etc.
  void caughtException(Exception e) {}
  ...
}
{code}

I personally prefer option (2) since we can ensure plugin API compatibility at compile-time, and we avoid an ugly switch statement in a runHook() function.
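
For what it's worth, here is a minimal sketch of the service-side dispatch under option (2); the dispatcher shape and the per-plugin exception guard are assumptions, not part of the proposal itself:

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch: the DataNode keeps its plugins in a list and fans each
// event out, isolating failures so a bad plugin cannot crash it.
class PluginDispatcher {
  private final List<DataNodePlugin> plugins =
      new CopyOnWriteArrayList<DataNodePlugin>();

  void register(DataNodePlugin p) { plugins.add(p); }

  interface Callback { void run(DataNodePlugin p); }

  void dispatch(Callback cb) {
    for (DataNodePlugin p : plugins) {
      try {
        cb.run(p);
      } catch (Throwable t) {
        // log and continue; do not let a plugin kill the service
      }
    }
  }
}

// e.g. at startup:
//   dispatcher.dispatch(new PluginDispatcher.Callback() {
//     public void run(DataNodePlugin p) { p.datanodeStarted(); }
//   });
{code}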

Interested to hear what people's thoughts are here.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Attachment: hadoop-5640.v3.txt

Unsurprisingly, the patch fell out of date against trunk. This is the same patch, rebased; its tests still pass.


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12697161#action_12697161 ] 

dhruba borthakur commented on HADOOP-5640:
------------------------------------------

I am slightly worried that making the datanode/namenode invoke plug-in calls synchronously at many different places introduces code complexity and may also cause deadlocks, depending on how the plug-ins are implemented. A plug-in implementor has to know fine-grained details of how the namenode/datanode code works in order to write plugins that behave well without impacting namenode/datanode performance.

Another approach would be to come up with an asynchronous publish/subscribe model. The namenode/datanode could write data to this channel without waiting for the consumer(s) to pick it up. It could be similar to a file-change-log, but would also contain internal state changes of dfs modules. Thoughts?
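
A minimal sketch of what such an asynchronous channel could look like, assuming a bounded in-memory queue and a dedicated consumer thread (all names hypothetical):

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical async event channel: the NN/DN enqueues events and
// returns immediately; a consumer thread delivers them to plugins.
class ServiceEventChannel {
  private final BlockingQueue<Object> queue =
      new ArrayBlockingQueue<Object>(1024);

  // Called from NN/DN code paths; never blocks the service.
  public boolean publish(Object event) {
    return queue.offer(event); // drops the event if the queue is full
  }

  // Run by a dedicated consumer thread owned by the plugin framework.
  public void consumeLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      Object event = queue.take();
      // deliver to subscribed plugins here
    }
  }
}
{code}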


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715607#action_12715607 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

Thanks for the review, Dhruba. I imagine that once this is committed we can start moving forward on some other hook points. Your fsck hook is a good example of how I envision this being useful; another is hooks for various JobTracker events (e.g. job completion, task completion, etc.).


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Tom White (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated HADOOP-5640:
------------------------------

    Status: Open  (was: Patch Available)


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Status: Open  (was: Patch Available)


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12721774#action_12721774 ] 

Tom White commented on HADOOP-5640:
-----------------------------------

Overall +1. A few comments:

* The plugin design is actually the listener pattern. For API extensibility, it might be worth considering a ServiceEvent class that is passed to the methods of ServicePlugin and its subclasses, so that extra context information can be added without breaking existing plugins. Also, ServicePlugin should be an abstract class.
* Rename PluginDispatcher to ServicePluginDispatcher. There are other plugins (e.g. MemoryCalculatorPlugin) so it's worth being more precise.
* SingleArgumentRunnable might be named ServicePluginCallback to better indicate its use.

BTW the latest patch doesn't apply; however it will need regenerating anyway after the project split.
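
A minimal sketch of the ServiceEvent idea (class shapes assumed): new context becomes a field on the event or a subclass, so already-compiled plugins keep working:

{code:java}
// Sketch: context travels on the event object, so adding a field
// later does not change any plugin method signature.
class ServiceEvent {
  private final long timestamp = System.currentTimeMillis();
  long getTimestamp() { return timestamp; }
}

class BlockReceivedEvent extends ServiceEvent {
  private final String blockId;
  BlockReceivedEvent(String blockId) { this.blockId = blockId; }
  String getBlockId() { return blockId; }
}

// A plugin method then takes the event rather than a parameter list:
//   public void receivedNewBlock(BlockReceivedEvent e) {}
{code}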


[jira] Issue Comment Edited: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698322#action_12698322 ] 

dhruba borthakur edited comment on HADOOP-5640 at 4/12/09 9:44 PM:
-------------------------------------------------------------------

Another option would be to copy the associated data into the event data structure itself. In that case, the plugin does not have to inspect any namenode/datanode data structures; instead, all the information it needs is encapsulated in the event. Is this feasible?
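
A minimal sketch of such a self-contained event (field names hypothetical); the constructor copies the values, so the plugin never sees live NN/DN state:

{code:java}
// Sketch: the event snapshots the data it needs at creation time.
class DatanodeRegisteredEvent {
  private final String name;      // copied values, not live references
  private final String storageId;

  DatanodeRegisteredEvent(String name, String storageId) {
    this.name = name;
    this.storageId = storageId;
  }
  String getName() { return name; }
  String getStorageId() { return storageId; }
}
{code}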


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Attachment: hadoop-5640.txt


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715388#action_12715388 ] 

dhruba borthakur commented on HADOOP-5640:
------------------------------------------

+1, code changes look good. I will wait for Hadoop QA to post the test results before committing it.


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12700800#action_12700800 ] 

Hadoop QA commented on HADOOP-5640:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12405720/HADOOP-5640.v2.txt
  against trunk revision 766190.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/213/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/213/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/213/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/213/console

This message is automatically generated.


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Attachment: HADOOP-5640.v2.txt


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716149#action_12716149 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

The failed contrib test is the capacity scheduler test.


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Carlos Valiente (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12696943#action_12696943 ] 

Carlos Valiente commented on HADOOP-5640:
-----------------------------------------

I'd go with option 2, making {{NamenodePlugin}} and {{DatanodePlugin}} implement {{ServicePlugin}}, but having them loaded as instances of their own classes:

{code}
  // For the namenode:
  List<NamenodePlugin> plugins = conf.getInstances("dfs.namenode.plugins", NamenodePlugin.class);
{code}
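
The datanode side would presumably mirror this; a sketch with an assumed config key, using the start/stop lifecycle HADOOP-5257 already provides:

{code}
  // For the datanode (key name assumed):
  List<DatanodePlugin> plugins =
      conf.getInstances("dfs.datanode.plugins", DatanodePlugin.class);
  for (DatanodePlugin p : plugins) {
    p.start(datanode);  // ServicePlugin.start() from HADOOP-5257
  }
{code}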

{quote}
DataNode:
 * namenode reconnect
{quote}

Do datanodes get notified of that kind of event? That would be the way to solve our problem in HADOOP-4707, when the namenode is restarted.


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698303#action_12698303 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

Hacked this up today, but I just realized there may be a big downside to the multithreaded approach. While it's nice to decouple the execution of the plugin from the execution of the service, it's likely to introduce a lot of hairy synchronization bugs. As a specific example, the Thrift datanode plugin accesses the datanode.dnRegistration member without synchronization, and the datanode potentially modifies this instance when re-registering with the namenode after a disconnection. My bet is that there are other, more subtle instances where this is the case.

Given this, I think I'd prefer to say "plugin authors should take care that they do not block or create potential deadlock situations with their host services. Install plugins at your own risk" rather than force them to execute in separate threads everywhere.

The other option is to look over all public members (and those available through accessors) to be sure that anything allowed to escape out of the daemon is an immutable snapshot of the internal state.
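
For example (hypothetical shape), an accessor could hand out an immutable snapshot instead of the live registration object:

{code:java}
// Hypothetical: instead of exposing the mutable dnRegistration,
// the DataNode hands plugins an immutable copy of its state.
class DatanodeRegistrationSnapshot {
  private final String name;

  DatanodeRegistrationSnapshot(String name) { this.name = name; }
  String getName() { return name; }
}

// DataNode-side accessor (sketch):
//   public synchronized DatanodeRegistrationSnapshot getRegistrationSnapshot() {
//     return new DatanodeRegistrationSnapshot(dnRegistration.getName());
//   }
{code}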


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Status: Patch Available  (was: Open)


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716142#action_12716142 ] 

Hadoop QA commented on HADOOP-5640:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12408851/hadoop-5640.v3.txt
  against trunk revision 781602.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/console

This message is automatically generated.


[jira] Assigned: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon reassigned HADOOP-5640:
-----------------------------------

    Assignee: Todd Lipcon


[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Status: Open  (was: Patch Available)

Oops - realized there's a flaw in the previous patch that could make the test flaky on multicore boxes. I'll resubmit when it's fixed.


[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715392#action_12715392 ] 

dhruba borthakur commented on HADOOP-5640:
------------------------------------------

I would like fsck to use this same plugin service and invoke a callback when it detects missing blocks for a file. Then I can set up such a plugin to automatically restore the corrupted file from an archival store (if such a store exists).
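
A hypothetical shape for such a hook (names assumed, following the option (2) style):

{code:java}
// Hypothetical NameNode-side plugin hook, in the option (2) style:
// fsck would invoke this when it finds a file with missing blocks.
abstract class NameNodePlugin {
  /** Called by fsck when a file has blocks with no live replicas. */
  void missingBlocksDetected(String path) {}
}

// An archival-restore plugin would override it:
class RestoreFromArchivePlugin extends NameNodePlugin {
  @Override
  void missingBlocksDetected(String path) {
    // kick off an asynchronous restore of 'path' from the archive store
  }
}
{code}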




[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12701239#action_12701239 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

Realized I didn't re-comment after uploading the new patch. The most recent patch (as QAed above) should be good to go.




[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698516#action_12698516 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

That's somewhat feasible, but since many of the Hadoop classes lack deep-copy support, it's a bit hard to guarantee.

I guess we're probably best off just leaving this with a warning... the two warning options are:

a) Your plugin hooks are called in the same thread as the service you are plugging into. Ensure that you never block in a plugin hook.
b) Your plugin hooks are called in a separate thread from the service you are plugging into. Ensure that you are aware of any synchronization issues that might arise as a result.

Personally I still prefer (a), since it's trivial to tell whether your code blocks, whereas it is somewhat harder to know whether you have a race. But if others still prefer (b), that's fine.
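
To make warning (a) concrete, here is a hedged sketch of a plugin that keeps its hook non-blocking by handing slow work to its own executor. DataNodePlugin and receivedNewBlock() follow the shape sketched in the description; the Block parameter and processSlowly() are placeholders invented for this example:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class NonBlockingPlugin extends DataNodePlugin {
  // Slow work (RPC, disk I/O) runs here, never on the service thread.
  private final ExecutorService worker = Executors.newSingleThreadExecutor();

  @Override
  void receivedNewBlock(final Block block) {
    // Return to the datanode immediately; process the block off-thread.
    worker.submit(new Runnable() {
      public void run() {
        processSlowly(block);
      }
    });
  }

  private void processSlowly(Block block) {
    // placeholder for the plugin's real work
  }
}
{code}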




[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Status: Patch Available  (was: Open)

Attaching a patch for the threaded version of plugin hook point dispatch. Test case included as well.
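
Purely as an illustration (the attached test may look quite different), a threaded-dispatch test can assert that a callback enqueued on the dispatcher eventually reaches a registered plugin. PluginDispatcher and SingleArgumentCaller follow the shapes sketched in the design discussion elsewhere in this thread; registerPlugin() and start() are assumed method names:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import static org.junit.Assert.assertTrue;

public void testDispatchReachesPlugin() throws Exception {
  final CountDownLatch latch = new CountDownLatch(1);

  PluginDispatcher<DataNodePlugin> dispatcher =
      new PluginDispatcher<DataNodePlugin>();
  dispatcher.registerPlugin(new DataNodePlugin() {
    @Override
    void datanodeStarted() {
      latch.countDown();
    }
  });
  dispatcher.start(); // assumed to spawn the dispatch thread

  dispatcher.enqueue(new SingleArgumentCaller<DataNodePlugin>() {
    public void call(DataNodePlugin p) {
      p.datanodeStarted();
    }
  });

  // The hook fires on the dispatcher thread, so wait rather than
  // asserting immediately.
  assertTrue("plugin never saw the event", latch.await(10, TimeUnit.SECONDS));
}
{code}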




[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12697234#action_12697234 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

{quote}
Another approach would be to come up with an asynchronous publish/subscribe kind of model. The namenode/datanode could write data to this channel without waiting for the consumer(s) to pick it up. It could be similar to a file-change-log, but would also contain internal state changes of dfs modules. Thoughts?
{quote}

I like the idea of an async pub/sub model. A couple of options for how we might implement this:

a) We have a single LinkedBlockingQueue attached to the service. When an event happens, the service enqueues it. A PluginDispatcher instance owns a thread doing something like:

{code:java}
while (true) {
  Event e = queue.take();
  for (Plugin p : plugins) {
    p.handleEvent(e);
  }
}
{code}

Events are dispatched from the service by just calling dispatcher.enqueueEvent(foo). We also delegate plugin registration/start/stop to the PluginDispatcher.

b) We have a LinkedBlockingQueue per plugin. Plugins are responsible for creating a thread which does a similar loop to above, trying to take events off the queue. Dispatch from the service then looks like:

{code:java}
for (Plugin p : plugins) {
  p.enqueueEvent(foo);
}
{code}


Both of the options above are a little ugly in that they require a class for each type of event that can be handled, and introduce a handleEvent(PluginEvent e) function in the plugins, likely with an ugly switch statement. With my functional-programmer hat on, I'd personally prefer something like:

{code:java}
/* does this generic interface exist somewhere in hadoop yet? */
interface SingleArgumentCaller<T> {
  void call(T p);
}

/* in namenode: */

...
// we just heard about a new datanode
dispatcher.enqueue(new SingleArgumentCaller<DatanodePlugin>() {
  public void call(DatanodePlugin p) { p.newDatanodeAppeared(...); }
});
...

interface DatanodePlugin {
  void newDatanodeAppeared(...);
  /* all the other hook points */
}

/* dispatcher looks like: */

class PluginDispatcher<T> {
  // the queue holds pending callbacks, not the plugins themselves
  private LinkedBlockingQueue<SingleArgumentCaller<T>> queue;
  private List<T> plugins;

  void run() {
    while (true) {
      SingleArgumentCaller<T> caller = queue.take();
      for (T plugin : plugins) {
        caller.call(plugin); /* plus some try..catch */
      }
    }
  }
}
{code}

If no one has any strong objections, I'll go with the SingleArgumentCaller route, since I think class-per-event proliferation here is a messy solution.
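
To make the wiring concrete, here is a hedged usage sketch; MyDatanodePlugin, registerPlugin(), start(), and namenodeReconnected() are names assumed for this illustration and may not match the eventual patch:

{code:java}
// Service-side setup: one dispatcher per pluggable service.
PluginDispatcher<DatanodePlugin> dispatcher =
    new PluginDispatcher<DatanodePlugin>();
dispatcher.registerPlugin(new MyDatanodePlugin());
dispatcher.start(); // spawns the thread running run() above

// Later, at a hook point (e.g. when the datanode reconnects to the
// namenode), the service enqueues a callback and moves on without
// waiting for plugins to run:
dispatcher.enqueue(new SingleArgumentCaller<DatanodePlugin>() {
  public void call(DatanodePlugin p) {
    p.namenodeReconnected();
  }
});
{code}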




[jira] Updated: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-5640:
--------------------------------

    Status: Patch Available  (was: Open)




[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12697062#action_12697062 ] 

Todd Lipcon commented on HADOOP-5640:
-------------------------------------

{quote}
Do datanodes get notified on that kind of events? That would be the way to solve our problem in HADOOP-4707, when the namenode is restarted.
{quote}

Yep, there's a perfect place to insert that, which was the other motivation I forgot to mention for this ticket :)




[jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698322#action_12698322 ] 

dhruba borthakur commented on HADOOP-5640:
------------------------------------------

Another option would be to copy the associated data into the event data structure itself. In that case, the plugin does not have to inspect any namenode/datanode data structures; instead, all the information it needs is encapsulated in the event. Is this feasible?
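
A minimal sketch of that idea, assuming a hypothetical BlockReceivedEvent; the Block accessors used here are illustrative, and the point is simply that the event copies values at construction time rather than holding references into live service state:

{code:java}
// Hypothetical self-contained event: copies what the plugin needs at
// construction time, so the plugin never inspects live NN/DN structures.
class BlockReceivedEvent {
  private final String blockName;
  private final long numBytes;

  BlockReceivedEvent(Block b) {
    // Copy immutable values, not references to mutable service objects.
    this.blockName = b.getBlockName();
    this.numBytes = b.getNumBytes();
  }

  String getBlockName() { return blockName; }
  long getNumBytes() { return numBytes; }
}
{code}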

