Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/10/25 00:43:00 UTC

[jira] [Commented] (HDFS-16817) Remove useless DataNode lock related configuration

    [ https://issues.apache.org/jira/browse/HDFS-16817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17623472#comment-17623472 ] 

ASF GitHub Bot commented on HDFS-16817:
---------------------------------------

haiyang1987 opened a new pull request, #5072:
URL: https://github.com/apache/hadoop/pull/5072

   
   ### Description of PR
   [HDFS-16817](https://issues.apache.org/jira/browse/HDFS-16817)
   Remove useless DataNode lock related configuration
   
   While looking at the code related to the DataNode lock, it was found that the following configuration settings no longer take effect and can likely be removed:
   
   ```
   public static final String DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY =
       "dfs.datanode.lock.read.write.enabled";
   public static final Boolean DFS_DATANODE_LOCK_READ_WRITE_ENABLED_DEFAULT =
       true;
   public static final String DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_KEY =
       "dfs.datanode.lock-reporting-threshold-ms";
   public static final long DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_DEFAULT =
       300L;

   <property>
     <name>dfs.datanode.lock.read.write.enabled</name>
     <value>true</value>
     <description>If this is true, the FsDataset lock will be a read write lock. If
       it is false, all locks will be a write lock.
       Enabling this should give better datanode throughput, as many read only
       functions can run concurrently under the read lock, when they would
       previously have required the exclusive write lock. As the feature is
       experimental, this switch can be used to disable the shared read lock, and
       cause all lock acquisitions to use the exclusive write lock.
     </description>
   </property>

   <property>
     <name>dfs.datanode.lock-reporting-threshold-ms</name>
     <value>300</value>
     <description>When thread waits to obtain a lock, or a thread holds a lock for
       more than the threshold, a log message will be written. Note that
       dfs.lock.suppress.warning.interval ensures a single log message is
       emitted per interval for waiting threads and a single message for holding
       threads to avoid excessive logging.
     </description>
   </property>
   ```
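   
   For context, a key defined in DFSConfigKeys normally has at least one call site that reads it through the Configuration API; the premise of this change is that no such call site remains for these two keys. As a rough sketch (not code from this PR, and the class name is made up for illustration), this is the kind of lookup that would have to exist somewhere for the settings to take effect:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.hdfs.DFSConfigKeys;
   
   // Illustration only: the usual pattern for consuming such keys. The point of
   // HDFS-16817 is that no equivalent lookup of these two constants remains in
   // the DataNode code, so the keys and their defaults are effectively dead.
   public class LockConfigLookupSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       boolean readWriteLockEnabled = conf.getBoolean(
           DFSConfigKeys.DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY,
           DFSConfigKeys.DFS_DATANODE_LOCK_READ_WRITE_ENABLED_DEFAULT);
       long reportingThresholdMs = conf.getLong(
           DFSConfigKeys.DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_KEY,
           DFSConfigKeys.DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_DEFAULT);
       System.out.println("read/write lock enabled: " + readWriteLockEnabled);
       System.out.println("lock reporting threshold: " + reportingThresholdMs + " ms");
     }
   }
   ```
   
   The second property's description amounts to "warn when a lock is waited on or held for longer than a threshold". A minimal stand-alone sketch of that behaviour (a hypothetical illustration, not the actual HDFS implementation) could look like:
   
   ```java
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.locks.ReentrantLock;
   
   // Hypothetical stand-in, not Hadoop's implementation: warn when the lock is
   // waited on or held for longer than a configurable threshold (milliseconds).
   public class ReportingLockSketch {
     private final ReentrantLock lock = new ReentrantLock();
     private final long thresholdMs;
     private long acquiredAtNanos;
   
     public ReportingLockSketch(long thresholdMs) {
       this.thresholdMs = thresholdMs;
     }
   
     public void lock() {
       long waitStartNanos = System.nanoTime();
       lock.lock();
       acquiredAtNanos = System.nanoTime();
       long waitedMs = TimeUnit.NANOSECONDS.toMillis(acquiredAtNanos - waitStartNanos);
       if (waitedMs > thresholdMs) {
         System.err.println("Waited " + waitedMs + " ms for lock (threshold " + thresholdMs + " ms)");
       }
     }
   
     public void unlock() {
       // Measure hold time while still holding the lock, then release and report.
       long heldMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - acquiredAtNanos);
       lock.unlock();
       if (heldMs > thresholdMs) {
         System.err.println("Held lock for " + heldMs + " ms (threshold " + thresholdMs + " ms)");
       }
     }
   }
   ```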
   
   
   
   




> Remove useless DataNode lock related configuration
> --------------------------------------------------
>
>                 Key: HDFS-16817
>                 URL: https://issues.apache.org/jira/browse/HDFS-16817
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>
> While looking at the code related to the DataNode lock, it was found that the following configuration settings no longer take effect and can likely be removed:
> {code:java}
> public static final String DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY =
>     "dfs.datanode.lock.read.write.enabled";
> public static final Boolean DFS_DATANODE_LOCK_READ_WRITE_ENABLED_DEFAULT =
>     true;
> public static final String DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_KEY =
>     "dfs.datanode.lock-reporting-threshold-ms";
> public static final long DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_DEFAULT =
>     300L;
>
> <property>
>   <name>dfs.datanode.lock.read.write.enabled</name>
>   <value>true</value>
>   <description>If this is true, the FsDataset lock will be a read write lock. If
>     it is false, all locks will be a write lock.
>     Enabling this should give better datanode throughput, as many read only
>     functions can run concurrently under the read lock, when they would
>     previously have required the exclusive write lock. As the feature is
>     experimental, this switch can be used to disable the shared read lock, and
>     cause all lock acquisitions to use the exclusive write lock.
>   </description>
> </property>
>
> <property>
>   <name>dfs.datanode.lock-reporting-threshold-ms</name>
>   <value>300</value>
>   <description>When thread waits to obtain a lock, or a thread holds a lock for
>     more than the threshold, a log message will be written. Note that
>     dfs.lock.suppress.warning.interval ensures a single log message is
>     emitted per interval for waiting threads and a single message for holding
>     threads to avoid excessive logging.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org