Posted to issues@trafodion.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/10/25 22:08:58 UTC
[jira] [Commented] (TRAFODION-2310) DTM Lead Logic on very busy system resulted in trafodion crash
[ https://issues.apache.org/jira/browse/TRAFODION-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15606619#comment-15606619 ]
ASF GitHub Bot commented on TRAFODION-2310:
-------------------------------------------
GitHub user zcorrea opened a pull request:
https://github.com/apache/incubator-trafodion/pull/782
[TRAFODION-2310] Changed soft down node processing to propagate node state change to remote monitor prior to killing processes.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/zcorrea/incubator-trafodion TRAFODION-2310
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/incubator-trafodion/pull/782.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #782
----
> DTM Lead Logic on very busy system resulted in trafodion crash
> ---------------------------------------------------------------
>
> Key: TRAFODION-2310
> URL: https://issues.apache.org/jira/browse/TRAFODION-2310
> Project: Apache Trafodion
> Issue Type: Bug
> Components: foundation
> Affects Versions: 2.1-incubating
> Reporter: Gonzalo E Correa
> Assignee: Gonzalo E Correa
> Fix For: 2.1-incubating
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> The root cause of this problem is that the monitor in node 0 was starved out of CPU cycles and the watchdog timer expired. Consequently, the node was brought down by the SQWatchdog process.
>
> This caused a sequence of events in which, as far as all the remote monitors were concerned, the TM leader was still in node 0, even though the $TM0 process no longer existed. The TM processes on the other nodes received the death message before the node was marked down, so they each sent a TM Leader request to their local monitor. The local monitor checks that the leader process exists and aborts if it does not (it should instead re-drive the selection of a new TM leader). Currently, node-down processing is what selects a new TM leader; there should also be logic that reassigns the leader when the leader process itself dies. This is a bug that needs fixing.
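The reassignment logic described above can be sketched as follows. This is a hypothetical illustration only, not the actual Trafodion monitor source: the names (`Monitor`, `NodeState`, `handleTmDeath`, `selectNewTmLeader`) are invented for the example. The point is that a death message for the current TM leader should re-drive leader selection rather than abort:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>

// Hypothetical sketch of TM leader reassignment on leader death.
// Names are illustrative and do not mirror the Trafodion sources.
enum class NodeState { Up, Down };

class Monitor {
public:
    void setNodeState(int nid, NodeState s) { nodes_[nid] = s; }
    void setTmLeader(int nid) { tmLeaderNid_ = nid; }
    int tmLeader() const { return tmLeaderNid_; }

    // Called when a TM process death message arrives. Rather than
    // aborting when the dead process was the leader (the bug), this
    // re-drives the selection of a new TM leader.
    void handleTmDeath(int nid) {
        if (nid == tmLeaderNid_) {
            selectNewTmLeader();
        }
    }

private:
    // Pick the lowest-numbered node that is still up and is not the
    // (now dead) current leader.
    void selectNewTmLeader() {
        for (const auto& entry : nodes_) {
            if (entry.second == NodeState::Up && entry.first != tmLeaderNid_) {
                tmLeaderNid_ = entry.first;
                return;
            }
        }
        throw std::runtime_error("no candidate node for TM leader");
    }

    std::map<int, NodeState> nodes_;
    int tmLeaderNid_ = -1;
};
```

With this shape, the ordering fix in the pull request (propagating the node-state change to remote monitors before killing processes) ensures the death message arrives only after the node is already marked `Down`, so re-selection sees a consistent view.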
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)