Posted to hdfs-dev@hadoop.apache.org by "dragon (JIRA)" <ji...@apache.org> on 2016/03/15 09:16:36 UTC

[jira] [Created] (HDFS-10099) CLONE - Erasure Coding: Fix the NullPointerException when deleting file

dragon created HDFS-10099:
-----------------------------

             Summary: CLONE - Erasure Coding: Fix the NullPointerException when deleting file
                 Key: HDFS-10099
                 URL: https://issues.apache.org/jira/browse/HDFS-10099
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: dragon
            Assignee: Yi Liu
             Fix For: HDFS-7285


In HDFS, when a file is removed, the NN also removes all of its blocks from {{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to the datanodes. After the datanodes successfully delete the block replicas, they report {{DELETED_BLOCK}} back to the NameNode.

The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as follows:
{code}
case DELETED_BLOCK:
        // getStoredBlock() returns null when the block is already gone
        // from BlocksMap, e.g. after the file has been deleted
        removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
        ...
{code}
{code}
private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
      DatanodeDescriptor node) {
    // block is null here when the caller's getStoredBlock() lookup found
    // nothing, so the dereference below throws NullPointerException
    if (shouldPostponeBlocksFromFuture &&
        namesystem.isGenStampInFuture(block)) {
      queueReportedBlock(storageInfo, block, null,
          QUEUE_REASON_FUTURE_GENSTAMP);
      return;
    }
    removeStoredBlock(getStoredBlock(block), node);
  }
{code}

In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} is thrown when handling the {{DELETED_BLOCK}} case of an incremental block report from a DataNode after a file has been deleted: the block has already been removed from {{BlocksMap}}, so {{getStoredBlock}} returns null, and we need to check for that (see the sketch below).
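A minimal sketch of one possible fix, assuming the null check is added at the top of the three-argument {{removeStoredBlock}} quoted above (illustrative only, not necessarily the committed patch):
{code}
private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
      DatanodeDescriptor node) {
    // The caller resolves the reported block via getStoredBlock(), which
    // returns null once the file (and its blocks) have been deleted.
    // Ignore the stale DELETED_BLOCK report instead of dereferencing null.
    if (block == null) {
      return;
    }
    if (shouldPostponeBlocksFromFuture &&
        namesystem.isGenStampInFuture(block)) {
      queueReportedBlock(storageInfo, block, null,
          QUEUE_REASON_FUTURE_GENSTAMP);
      return;
    }
    removeStoredBlock(getStoredBlock(block), node);
  }
{code}
Putting the guard in the helper covers every {{DELETED_BLOCK}} call site, rather than repeating the null check at each caller.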



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)