Posted to common-dev@hadoop.apache.org by "Eli Collins (Resolved) (JIRA)" <ji...@apache.org> on 2012/03/29 18:38:28 UTC

[jira] [Resolved] (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events

     [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-5640.
---------------------------------

    Resolution: Won't Fix
    
> Allow ServicePlugins to hook callbacks into key service events
> --------------------------------------------------------------
>
>                 Key: HADOOP-5640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5640
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: util
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: 0093-HADOOP-5640-Add-dispatch-mechanism-for-services-to.patch, 0149-HADOOP-5640-puts-this-in-the-new-test-dir.-It-needs.patch, HADOOP-5640.v2.txt, hadoop-5640.txt, hadoop-5640.txt, hadoop-5640.v3.txt
>
>
> HADOOP-5257 added the ability for NameNode and DataNode to start and stop ServicePlugin implementations at NN/DN start/stop. However, this integration is insufficient for some common use cases.
> We should add some functionality for Plugins to subscribe to events generated by the service they're plugging into. Some potential hook points are:
> NameNode:
>   - new datanode registered
>   - datanode has died
>   - exception caught
>   - etc?
> DataNode:
>   - startup
>   - initial registration with NN complete (this is important for HADOOP-4707 to sync up datanode.dnRegistration.name with the NN-side registration)
>   - namenode reconnect
>   - some block transfer hooks?
>   - exception caught
> I see two potential routes for implementation:
> 1) We make an enum for the types of hookpoints and have a general function in the ServicePlugin interface. Something like:
> {code:java}
> enum HookPoint {
>   DN_STARTUP,
>   DN_RECEIVED_NEW_BLOCK,
>   DN_CAUGHT_EXCEPTION,
>   ...
> }
> void runHook(HookPoint hp, Object value);
> {code}
> 2) We make classes specific to each "pluggable" as was originally suggested in HADOOP-5257. Something like:
> {code:java}
> class DataNodePlugin {
>   void datanodeStarted() {}
>   void receivedNewBlock(block info, etc) {}
>   void caughtException(Exception e) {}
>   ...
> }
> {code}
> I personally prefer option (2), since we can ensure plugin API compatibility at compile time, and we avoid an ugly switch statement in a runHook() function.
> Interested to hear what people's thoughts are here.
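To make option (2) concrete, here is a minimal compilable sketch of the typed-plugin approach the reporter prefers. Everything beyond the original snippet is illustrative: `BlockInfo`, `BlockCountingPlugin`, and the method signatures are hypothetical stand-ins, not actual Hadoop APIs.

```java
// Illustrative sketch of option (2): a per-service plugin base class with
// typed hook methods. Names here (BlockInfo, BlockCountingPlugin) are
// hypothetical and do not correspond to real Hadoop classes.

// Hypothetical stand-in for whatever block metadata a hook would receive.
class BlockInfo {
    final long blockId;
    BlockInfo(long blockId) { this.blockId = blockId; }
}

// Base class with no-op defaults: plugins override only the hooks they care
// about, and new hooks can be added later without breaking existing plugins.
class DataNodePlugin {
    void datanodeStarted() {}
    void receivedNewBlock(BlockInfo block) {}
    void caughtException(Exception e) {}
}

// A concrete plugin that only tracks new-block events. Overriding a typed
// method means a signature mismatch fails at compile time, which is the
// compatibility argument made for option (2) above.
class BlockCountingPlugin extends DataNodePlugin {
    int blocksReceived = 0;

    @Override
    void receivedNewBlock(BlockInfo block) {
        blocksReceived++;
    }
}

public class PluginDemo {
    public static void main(String[] args) {
        BlockCountingPlugin plugin = new BlockCountingPlugin();
        // The hosting service would invoke these hooks as events occur:
        plugin.datanodeStarted();
        plugin.receivedNewBlock(new BlockInfo(42L));
        plugin.receivedNewBlock(new BlockInfo(43L));
        System.out.println(plugin.blocksReceived); // prints 2
    }
}
```

Under option (1), the same plugin would instead switch on a HookPoint enum inside runHook() and cast the Object payload, trading the compile-time check for a single dispatch entry point.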

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira