Posted to hdfs-dev@hadoop.apache.org by "Ayush Saxena (JIRA)" <ji...@apache.org> on 2019/05/29 18:07:00 UTC

[jira] [Resolved] (HDFS-14468) StorageLocationChecker methods throw DiskErrorExceptions when the configuration has wrong values

     [ https://issues.apache.org/jira/browse/HDFS-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HDFS-14468.
---------------------------------
    Resolution: Duplicate

> StorageLocationChecker methods throw DiskErrorExceptions when the configuration has wrong values
> ------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14468
>                 URL: https://issues.apache.org/jira/browse/HDFS-14468
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: eBugs
>            Priority: Minor
>
> Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java. Our prototype has spotted the following three {{throw}} statements whose exception class and error message indicate different error conditions.
>  
> Version: Hadoop-3.1.2
> File: HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java
> Line: 96-98, 110-113, and 173-176
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
>     + maxAllowedTimeForCheckMs + " (should be > 0)");{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
>     + maxVolumeFailuresTolerated + " "
>     + DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
> {code:java}
> throw new DiskErrorException("Invalid value configured for "
>     + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
>     + maxVolumeFailuresTolerated + ". Value configured is >= "
>     + "to the number of configured volumes (" + dataDirs.size() + ").");{code}
>  
> A {{DiskErrorException}} means that an error occurred while the process was interacting with the disk; e.g., in {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the following code (lines 97-98):
> {code:java}
> throw new DiskErrorException("Cannot create directory: " + dir.toString());{code}
> However, the error messages of the first three exceptions indicate that the {{StorageLocationChecker}} is configured incorrectly, which means there is nothing wrong with the disk (yet). This mismatch could be a problem. For example, callers trying to handle a genuine {{DiskErrorException}} may accidentally (and incorrectly) handle the configuration error as well.
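> One possible direction (a hypothetical sketch, not actual Hadoop code — the class, method, and configuration-key names below are stand-ins) is to report configuration mistakes as an unchecked {{IllegalArgumentException}}, so that a {{catch (DiskErrorException e)}} block no longer swallows them:
> {code:java}
> // Hypothetical sketch: keep configuration errors and genuine disk
> // errors on separate exception types so callers can tell them apart.
> public class StorageCheckSketch {
>     // Stand-in for org.apache.hadoop.util.DiskChecker.DiskErrorException.
>     public static class DiskErrorException extends java.io.IOException {
>         public DiskErrorException(String msg) { super(msg); }
>     }
>
>     // Still declares DiskErrorException for real disk failures, but a
>     // bad configuration value becomes an IllegalArgumentException.
>     public static void validateTimeout(long maxAllowedTimeForCheckMs)
>             throws DiskErrorException {
>         if (maxAllowedTimeForCheckMs <= 0) {
>             throw new IllegalArgumentException(
>                 "Invalid value configured for disk check timeout - "
>                 + maxAllowedTimeForCheckMs + " (should be > 0)");
>         }
>     }
>
>     public static void main(String[] args) {
>         try {
>             validateTimeout(0);
>         } catch (DiskErrorException e) {
>             System.out.println("disk");    // wrong handler for a config error
>         } catch (IllegalArgumentException e) {
>             System.out.println("config");  // config error handled on its own path
>         }
>     }
> }
> {code}
> With the current code, the same configuration mistake would land in the {{DiskErrorException}} branch instead.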



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
