Posted to hdfs-dev@hadoop.apache.org by "JiraUser (Jira)" <ji...@apache.org> on 2020/06/25 16:38:00 UTC

[jira] [Created] (HDFS-15438) dfs.disk.balancer.max.disk.errors = 0 will fail the block copy

JiraUser created HDFS-15438:
-------------------------------

             Summary: dfs.disk.balancer.max.disk.errors = 0 will fail the block copy
                 Key: HDFS-15438
                 URL: https://issues.apache.org/jira/browse/HDFS-15438
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: balancer & mover
            Reporter: JiraUser


In the HDFS disk balancer, the config parameter "dfs.disk.balancer.max.disk.errors" controls the maximum number of errors that can be ignored for a specific move between two disks before the move is abandoned.

The validation code accepts any value >= 0, and setting the value to 0 should mean zero error tolerance. However, with the current implementation in DiskBalancer.java, setting the value to 0 simply skips the block copy altogether.


{code:java}
// Gets the next block that we can copy
private ExtendedBlock getBlockToCopy(FsVolumeSpi.BlockIterator iter,
                                     DiskBalancerWorkItem item) {
  // With getMaxError(item) == 0, the loop condition is false on the
  // first check, so the loop body never runs and no block is copied.
  while (!iter.atEnd() && item.getErrorCount() < getMaxError(item)) { // getMaxError = 0
    try {
      ... // get the block
    } catch (IOException e) {
      item.incErrorCount();
    }
    if (item.getErrorCount() >= getMaxError(item)) {
      item.setErrMsg("Error count exceeded.");
      LOG.info("Maximum error count exceeded. Error count: {} Max error:{} ",
          item.getErrorCount(), item.getMaxDiskErrors());
    }
  }
  ...
}
{code}

*How to fix*

Change the while loop condition and the following if statement condition so that a value of 0 is honored: the copy should proceed and only be abandoned once the error count exceeds the configured maximum (e.g. by comparing with <= and > instead of < and >=).
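As a rough illustration (a standalone simulation, not the actual DiskBalancer code — the class, method, and parameter names below are made up for the demo), the comparison change makes maxError = 0 mean "copy until the first error" instead of "never copy":

```java
// Hypothetical simulation of the disk balancer's error-tolerance loop.
// The original strict comparison (errorCount < maxError) never enters the
// loop when maxError == 0; relaxing it to <= lets copying proceed until
// the first error, after which the > check aborts the move.
public class MaxErrorDemo {

  // Returns how many blocks were "copied" before the error budget ran out.
  // failAt is the index of the single block whose copy raises an error.
  static int copyBlocks(int totalBlocks, int maxError, int failAt) {
    int copied = 0;
    int errorCount = 0;
    int i = 0;
    while (i < totalBlocks && errorCount <= maxError) { // was: errorCount < maxError
      if (i == failAt) {
        errorCount++;            // simulate an IOException on this block
      } else {
        copied++;                // simulate a successful block copy
      }
      i++;
      if (errorCount > maxError) { // was: errorCount >= maxError
        break;                     // "Error count exceeded."
      }
    }
    return copied;
  }

  public static void main(String[] args) {
    // maxError = 0: blocks are copied until the first error, then we abort.
    System.out.println(copyBlocks(10, 0, 3)); // copies blocks 0..2, then stops
    // maxError = 0, error on the very first block: nothing is copied.
    System.out.println(copyBlocks(10, 0, 0));
  }
}
```

With the original < / >= comparisons, copyBlocks(10, 0, 3) would return 0 because the loop is never entered, which matches the reported bug.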
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org