Posted to hdfs-issues@hadoop.apache.org by "ShuangQi Xia (Jira)" <ji...@apache.org> on 2022/07/29 10:10:00 UTC
[jira] [Updated] (HDFS-16699) Router updates Observer NameNode state to Active on failover because of SocketTimeoutException
[ https://issues.apache.org/jira/browse/HDFS-16699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ShuangQi Xia updated HDFS-16699:
--------------------------------
Summary: Router updates Observer NameNode state to Active on failover because of SocketTimeoutException (was: Router Update Observer NameNode state to Active when failover because of failover Exception)
> Router updates Observer NameNode state to Active on failover because of SocketTimeoutException
> ----------------------------------------------------------------------------------------------
>
> Key: HDFS-16699
> URL: https://issues.apache.org/jira/browse/HDFS-16699
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: rbf
> Affects Versions: 3.1.1
> Reporter: ShuangQi Xia
> Priority: Major
>
> We found that the Router keeps printing logs indicating that an Observer NameNode's state has changed to Active. Here's the log:
> 2022-03-18 11:00:54,589 | INFO | NamenodeHeartbeatService hacluster 11342-0 | NN registration state has changed: test101:25019->hacluster:11342:test103:25000-ACTIVE -> test102:25019->hacluster:11342::test103:25000-OBSERVER | MembershipStoreImpl.java:170
> In the code, I found that when a Router request fails for some reason, such as a SocketTimeoutException, and fails over to an Observer NameNode, the Router updates that NameNode's state to Active:
> if (failover) {
>   // Success on alternate server, update
>   InetSocketAddress address = client.getAddress();
>   namenodeResolver.updateActiveNamenode(nsId, address);
> }
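The fix the report implies is to check the failover target's actual HA state before recording it as active, rather than assuming any server that answered after a failover must be the active NameNode. The sketch below is a hypothetical, self-contained illustration of that guard, not Hadoop's actual Router code: `FailoverStateGuard`, `shouldUpdateActive`, and the simplified `HAState` enum are all assumptions standing in for the real `RouterRpcClient` / `HAServiceProtocol.HAServiceState` types.

```java
public class FailoverStateGuard {

    // Simplified HA states; the real Hadoop type is
    // org.apache.hadoop.ha.HAServiceProtocol.HAServiceState.
    enum HAState { ACTIVE, STANDBY, OBSERVER }

    /**
     * Decide whether a successful retry against a failover target should
     * cause the resolver to record it as the active NameNode. A failover
     * triggered by a plain SocketTimeoutException can land on an Observer,
     * which must not be promoted to ACTIVE in the membership store.
     */
    static boolean shouldUpdateActive(boolean failover, HAState targetState) {
        // Only record the target as active when it actually reports ACTIVE.
        return failover && targetState == HAState.ACTIVE;
    }

    public static void main(String[] args) {
        // Failover succeeded on a genuinely active NameNode: update.
        System.out.println(shouldUpdateActive(true, HAState.ACTIVE));   // true
        // Failover landed on an Observer after a socket timeout: skip.
        System.out.println(shouldUpdateActive(true, HAState.OBSERVER)); // false
    }
}
```

With such a guard, a timeout-driven failover that happens to reach an Observer would leave the membership store untouched instead of flipping the Observer's recorded state to ACTIVE, as seen in the log above.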
--
This message was sent by Atlassian Jira
(v8.20.10#820010)