Posted to hdfs-dev@hadoop.apache.org by "eBugs in Cloud Systems (JIRA)" <ji...@apache.org> on 2019/05/06 14:01:00 UTC

[jira] [Created] (HDFS-14470) DataNode.startDataNode() throws a DiskErrorException when the configuration has wrong values

eBugs in Cloud Systems created HDFS-14470:
---------------------------------------------

             Summary: DataNode.startDataNode() throws a DiskErrorException when the configuration has wrong values
                 Key: HDFS-14470
                 URL: https://issues.apache.org/jira/browse/HDFS-14470
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: eBugs in Cloud Systems


Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java. Our prototype has spotted the following {{throw}} statement whose exception class and error message seem to indicate different error conditions. Since we are not very familiar with HDFS's internal workflow, could you please help us verify whether this is a bug, i.e., whether callers will have trouble handling the exception, and whether users/admins will have trouble diagnosing the failure?

 

Version: Hadoop-3.1.2

File: HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java

Line: 1407-1410
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
    + ". Value configured is either less than -1 or >= "
    + "to the number of configured volumes (" + volsConfigured + ").");{code}
Reason: A {{DiskErrorException}} normally means an error has occurred while the process is interacting with the disk; for example, in {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the following code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " + dir.toString());{code}
However, the error message of the first exception indicates that {{dfs.datanode.failed.volumes.tolerated}} is configured incorrectly, which means there is nothing wrong with the disks (yet). Will this mismatch be a problem? For example, will callers that try to handle other {{DiskErrorException}}s accidentally (and incorrectly) handle this configuration error as well?
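To make the concern concrete, here is a minimal, purely hypothetical caller (not actual HDFS code) in which a handler written for genuine disk faults would also capture the configuration error, because both surface as the same exception type:
{code:java}
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

// Hypothetical sketch: a real disk fault and the misconfiguration above reach
// the same catch block, so the handler cannot tell them apart.
public class DiskErrorHandlingSketch {
  // Stand-in for DataNode.startDataNode() rejecting the configured value.
  static void startWithBadConfig() throws DiskErrorException {
    throw new DiskErrorException("Invalid value configured for "
        + "dfs.datanode.failed.volumes.tolerated - 5. Value configured is "
        + "either less than -1 or >= to the number of configured volumes (1).");
  }

  public static void main(String[] args) {
    try {
      startWithBadConfig();
    } catch (DiskErrorException e) {
      // A caller assuming a disk problem might re-check or blacklist volumes
      // here, even though the disks are healthy and only the configuration
      // value is wrong.
      System.err.println("Treated as a disk failure: " + e.getMessage());
    }
  }
}
{code}
If the condition really is a configuration error, an exception such as {{HadoopIllegalArgumentException}} might describe it more precisely, but we defer to your judgment.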


