Posted to common-dev@hadoop.apache.org by "Enis Soztutar (JIRA)" <ji...@apache.org> on 2008/08/18 17:01:44 UTC
[jira] Issue Comment Edited: (HADOOP-2062) Standardize
long-running, daemon-like, threads in hadoop daemons
[ https://issues.apache.org/jira/browse/HADOOP-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623364#action_12623364 ]
enis edited comment on HADOOP-2062 at 8/18/08 7:59 AM:
----------------------------------------------------------------
Why don't we implement the child daemon threads as Service subclasses? This way we can check for ping() and exceptions, no?
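A rough sketch of what this suggestion might look like. Note that Service, the ping() semantics, and the ServiceThread name here are assumptions for illustration, not Hadoop's actual API:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch, not Hadoop's actual Service API: wrap a child
// daemon thread so the owning daemon can ping() it for liveness and
// surface any Throwable it died with.
class ServiceThread extends Thread {
  private final AtomicReference<Throwable> failure = new AtomicReference<>();
  private final Runnable body;

  ServiceThread(String name, Runnable body) {
    super(name);
    setDaemon(true);
    this.body = body;
  }

  @Override
  public void run() {
    try {
      body.run();
    } catch (Throwable t) {
      failure.set(t); // keep the cause so ping() can report it
    }
  }

  /** Throws if the thread has failed or exited, in the spirit of a Service.ping() check. */
  void ping() {
    Throwable t = failure.get();
    if (t != null) {
      throw new IllegalStateException(getName() + " failed", t);
    }
    if (!isAlive() && getState() != State.NEW) {
      throw new IllegalStateException(getName() + " exited");
    }
  }
}
```

The parent daemon would call ping() periodically and restart or fail fast when it throws.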
> Standardize long-running, daemon-like, threads in hadoop daemons
> ----------------------------------------------------------------
>
> Key: HADOOP-2062
> URL: https://issues.apache.org/jira/browse/HADOOP-2062
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs, mapred
> Reporter: Arun C Murthy
> Assignee: Arun C Murthy
>
> There are several long-running, independent, threads in hadoop daemons (at least in the JobTracker - e.g. ExpireLaunchingTasks, ExpireTrackers, TaskCommitQueue etc.) which need to be alive as long as the daemon itself and hence should be impervious to various errors and exceptions (e.g. HADOOP-2051).
> Currently, each of them seems to be hand-crafted (again, specifically in the JobTracker) and different from the others.
> I propose we standardize on an implementation of a long-running, impervious, daemon-thread which can be used all over the shop. That thread should be explicitly shut down by the hadoop daemon and shouldn't be vulnerable to any exceptions/errors.
> This will most likely look something like this:
> {noformat}
> public abstract class DaemonThread extends Thread {
>   public static final Log LOG = LogFactory.getLog(DaemonThread.class);
>
>   {
>     setDaemon(true); // always a daemon
>   }
>
>   public abstract void innerLoop() throws InterruptedException;
>
>   public final void run() {
>     while (!isInterrupted()) {
>       try {
>         innerLoop();
>       } catch (InterruptedException ie) {
>         LOG.warn(getName() + " interrupted, exiting...");
>         interrupt(); // catching InterruptedException clears the flag;
>                      // restore it so the while() condition actually exits
>       } catch (Throwable t) {
>         LOG.error(getName() + " got an exception: " +
>                   StringUtils.stringifyException(t));
>       }
>     }
>   }
> }
> {noformat}
> In fact, we could probably hijack org.apache.hadoop.util.Daemon since it isn't used anywhere (Doug, is it still used in nutch?) or at least sub-class that.
> Thoughts? Could someone from hdfs/hbase chime in?
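The DaemonThread sketch in the description can be exercised as follows. This is a self-contained rendering under a few assumptions: the interrupt flag is restored in the catch so the while() condition actually sees it, logging is elided to plain comments, and CounterThread with its tick counter is a made-up illustrative subclass, not Hadoop code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Self-contained rendering of the proposed DaemonThread (with the
// interrupt flag restored so the loop can exit), plus an illustrative
// subclass loosely in the style of the JobTracker's periodic expiry
// threads.
abstract class DaemonThread extends Thread {
  {
    setDaemon(true); // always a daemon
  }

  public abstract void innerLoop() throws InterruptedException;

  @Override
  public final void run() {
    while (!isInterrupted()) {
      try {
        innerLoop();
      } catch (InterruptedException ie) {
        interrupt(); // restore the flag; without this the loop never exits
      } catch (Throwable t) {
        // swallow and keep looping: the thread must outlive any error
      }
    }
  }
}

// Made-up subclass for illustration: bumps a counter each iteration.
class CounterThread extends DaemonThread {
  final AtomicInteger ticks = new AtomicInteger();

  @Override
  public void innerLoop() throws InterruptedException {
    ticks.incrementAndGet();
    Thread.sleep(5);
  }
}
```

The owning daemon shuts the thread down explicitly with interrupt(); any Throwable thrown from innerLoop() is swallowed and the loop keeps going, which is exactly the "impervious" behaviour the proposal asks for.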
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.