Posted to hdfs-dev@hadoop.apache.org by "Takanobu Asanuma (Jira)" <ji...@apache.org> on 2023/01/23 09:32:00 UTC
[jira] [Resolved] (HDFS-16876) Garbage collect map entries in shared RouterStateIdContext using information from namenodeResolver instead of the map of active connectionPools.
[ https://issues.apache.org/jira/browse/HDFS-16876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Takanobu Asanuma resolved HDFS-16876.
-------------------------------------
Fix Version/s: 3.4.0
Resolution: Fixed
> Garbage collect map entries in shared RouterStateIdContext using information from namenodeResolver instead of the map of active connectionPools.
> ------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-16876
> URL: https://issues.apache.org/jira/browse/HDFS-16876
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: rbf
> Reporter: Simbarashe Dzinamarira
> Assignee: Simbarashe Dzinamarira
> Priority: Critical
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> An element in RouterStateIdContext#namespaceIdMap is deleted when no connectionPool references the namespace. This is done by a thread in ConnectionManager that cleans up stale connectionPools. I propose a less aggressive approach: clean up an entry only when the router can no longer resolve a namenode belonging to that namespace.
> Some benefits of this approach are:
> * Even when there are no active connections, the router still tracks a recent state of the namenode. This will be beneficial for debugging.
> * Simpler lifecycle for the map entries. The entries are long-lived.
> * Fewer operations under the writeLock in ConnectionManager.
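The proposed cleanup policy can be sketched as follows. This is a minimal illustration, not the actual Hadoop code: the class, field, and method names below (a map from nameservice ID to state ID, plus a cleanup driven by the set of namespaces the resolver can still see) are hypothetical stand-ins for RouterStateIdContext#namespaceIdMap and namenodeResolver.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed, less aggressive cleanup: an entry
// survives as long as the resolver can still resolve its namespace, even
// when no active connectionPool references it.
public class NamespaceCleanupSketch {
    // Stand-in for RouterStateIdContext#namespaceIdMap:
    // nameservice ID -> last-seen namenode state ID.
    private final Map<String, Long> namespaceIdMap = new ConcurrentHashMap<>();

    public void recordStateId(String nsId, long stateId) {
        namespaceIdMap.put(nsId, stateId);
    }

    // Drop an entry only when the resolver no longer knows any namenode
    // for that namespace, rather than when its last connectionPool closes.
    public void cleanupStaleNamespaces(Set<String> resolvableNamespaces) {
        namespaceIdMap.keySet().removeIf(nsId -> !resolvableNamespaces.contains(nsId));
    }

    public boolean tracks(String nsId) {
        return namespaceIdMap.containsKey(nsId);
    }

    public static void main(String[] args) {
        NamespaceCleanupSketch ctx = new NamespaceCleanupSketch();
        ctx.recordStateId("ns1", 100L);
        ctx.recordStateId("ns2", 200L);
        // Suppose the resolver can still resolve only ns1: ns2's entry is
        // removed, while ns1 keeps its recent state even with no connections.
        ctx.cleanupStaleNamespaces(Set.of("ns1"));
        System.out.println(ctx.tracks("ns1")); // true
        System.out.println(ctx.tracks("ns2")); // false
    }
}
```

Keeping entries keyed to resolvability rather than to connection liveness is what gives the long-lived, simple lifecycle described above: an entry changes state only when the namespace itself disappears from the resolver.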
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org