Posted to hdfs-dev@hadoop.apache.org by "Jingxuan Fu (Jira)" <ji...@apache.org> on 2022/07/06 09:13:00 UTC

[jira] [Created] (HDFS-16653) Safe mode related operations cannot be performed when “dfs.client.mmap.cache.size” is set to a negative number

Jingxuan Fu created HDFS-16653:
----------------------------------

             Summary: Safe mode related operations cannot be performed when “dfs.client.mmap.cache.size” is set to a negative number
                 Key: HDFS-16653
                 URL: https://issues.apache.org/jira/browse/HDFS-16653
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.1.3
         Environment: Linux version 4.15.0-142-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12))
            Reporter: Jingxuan Fu
            Assignee: Jingxuan Fu


 
{code:java}
<property>
  <name>dfs.client.mmap.cache.size</name>
  <value>256</value>
  <description>
    When zero-copy reads are used, the DFSClient keeps a cache of recently used
    memory mapped regions.  This parameter controls the maximum number of
    entries that we will keep in that cache.
    The larger this number is, the more file descriptors we will potentially
    use for memory-mapped files.  mmaped files also use virtual address space.
    You may need to increase your ulimit virtual address space limits before
    increasing the client mmap cache size.
    
    Note that you can still do zero-copy reads when this size is set to 0.
  </description>
</property>
{code}
When "dfs.client.mmap.cache.size" is set to a negative number, manually leaving safe mode with the "hdfs dfsadmin -safemode leave" command becomes invalid, and the other safe-mode operations provided by dfsadmin cannot be performed either. The terminal only reports safemode as null; no exception is thrown.
{code:java}
hadoop@ljq1:~/hadoop-3.1.3-work/etc/hadoop$ hdfs dfsadmin -safemode leave
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit] {code}
However, verification shows that HDFS is in fact still in safe mode at this point. Once the DataNodes start up and report their block states to the NameNode, and the other safety criteria are met, safe mode is left automatically after a certain period of time.
{code:java}
hadoop@ljq1:~/hdfsapi$ ./test_hdfsapi.sh 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /hdfsapi/test. Name node is in safe mode.
The reported blocks 5 has reached the threshold 0.9990 of total blocks 5. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 9 seconds. {code}
Therefore, it seems necessary to refine how this configuration item affects safe-mode related operations, for example by validating the value and reporting a clear error instead of failing silently with "safemode: null".
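A minimal sketch of the kind of validation suggested above (this is hypothetical illustration code, not the actual Hadoop implementation; the class and method names are made up): reject a negative "dfs.client.mmap.cache.size" eagerly with an actionable message, so the misconfiguration surfaces at client startup rather than as a later "safemode: null".
{code:java}
// Hypothetical sketch: fail fast on a negative mmap cache size
// instead of letting it silently break later dfsadmin operations.
public class MmapCacheSizeValidator {
    static final String KEY = "dfs.client.mmap.cache.size";

    /** Returns the configured value when non-negative, otherwise throws. */
    public static int validate(int configured) {
        if (configured < 0) {
            throw new IllegalArgumentException(
                KEY + " must be >= 0, but was " + configured);
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(validate(256)); // the documented default passes
        try {
            validate(-1);                  // a negative value is rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}
With a check along these lines, "hdfs dfsadmin -safemode leave" would never see a half-initialized client: the bad value is reported up front with the offending key and number.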
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
