Posted to hdfs-dev@hadoop.apache.org by "Karthik Palanisamy (JIRA)" <ji...@apache.org> on 2018/12/21 02:40:00 UTC

[jira] [Created] (HDFS-14164) Namenode should not be started if safemode threshold is out of bounds

Karthik Palanisamy created HDFS-14164:
-----------------------------------------

             Summary: Namenode should not be started if safemode threshold is out of bounds
                 Key: HDFS-14164
                 URL: https://issues.apache.org/jira/browse/HDFS-14164
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 3.1.1, 2.7.3
         Environment: #apache hadoop-3.1.1
            Reporter: Karthik Palanisamy
            Assignee: Karthik Palanisamy


A user mistakenly configured the safemode threshold (dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. With this value the web UI shows an incorrect summary and the NameNode never leaves safemode without manual intervention: the number of reported blocks required to exit safemode is computed as threshold * total blocks, so any threshold above 1.0 demands more blocks than actually exist.

 

For example:

Wrong setting: dfs.namenode.safemode.threshold-pct=090

Summary:

Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach the threshold 90.0000 of total blocks 4. The number of live datanodes 3 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.

10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded block groups) = 14 total filesystem object(s).
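
In the summary above the NameNode is asking for threshold * total blocks = 90 * 4 = 360 reported blocks, which can never be satisfied because only 4 blocks exist. Below is a minimal sketch of the kind of startup bounds check this issue asks for; it is not the actual Hadoop code path, the class and method names are hypothetical, and only the configuration key and its 0.999 default are taken from HDFS.

import org.apache.hadoop.conf.Configuration;

public class SafeModeThresholdCheck {

  // Hypothetical helper, not the actual Hadoop code path: validate the
  // safemode threshold once at NameNode startup.
  static final String THRESHOLD_KEY = "dfs.namenode.safemode.threshold-pct";
  static final float THRESHOLD_DEFAULT = 0.999f;

  static float validateThreshold(Configuration conf) {
    float threshold = conf.getFloat(THRESHOLD_KEY, THRESHOLD_DEFAULT);
    // Values <= 0 are documented as "do not wait for any blocks", but a
    // value above 1.0 (such as 090) requires more reported blocks than
    // exist, so fail fast instead of starting with permanent safemode.
    if (threshold > 1.0f) {
      throw new IllegalArgumentException(
          THRESHOLD_KEY + " = " + threshold
          + " is out of bounds; expected a fraction between 0 and 1.");
    }
    return threshold;
  }
}

Whether the NameNode should fail fast like this or only log a warning is a design choice; the summary of this issue suggests refusing to start at all.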



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org