Posted to hdfs-dev@hadoop.apache.org by "Walter Su (JIRA)" <ji...@apache.org> on 2015/11/16 13:15:11 UTC

[jira] [Resolved] (HDFS-8770) ReplicationMonitor thread received Runtime exception: NullPointerException when BlockManager.chooseExcessReplicates

     [ https://issues.apache.org/jira/browse/HDFS-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Walter Su resolved HDFS-8770.
-----------------------------
    Resolution: Duplicate

HDFS-9313 probably fixed this as a workaround, and HDFS-9314 has been filed to improve it.

Closing as a duplicate. Feel free to reopen if you disagree.

> ReplicationMonitor thread received Runtime exception: NullPointerException when BlockManager.chooseExcessReplicates
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-8770
>                 URL: https://issues.apache.org/jira/browse/HDFS-8770
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.0, 2.7.0
>            Reporter: ade
>            Assignee: ade
>            Priority: Critical
>         Attachments: HDFS-8770_v1.patch
>
>
> Namenode shutdown when ReplicationMonitor thread received Runtime exception:
> {quote}
> 2015-07-08 16:43:55,167 ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:189)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseExcessReplicates(BlockManager.java:2911)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processOverReplicatedBlock(BlockManager.java:2849)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatedBlock(BlockManager.java:2780)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.rescanPostponedMisreplicatedBlocks(BlockManager.java:1931)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3628)
>         at java.lang.Thread.run(Thread.java:744)
> {quote}
> We use hadoop-2.6.0 configured with heterogeneous storage and set the storage policy of some paths to One_SSD. When a block has excess replicas, e.g. 2 SSD replicas on different racks (the exactlyOne set) and 2 DISK replicas on the same rack (the moreThanOne set), BlockPlacementPolicyDefault.chooseReplicaToDelete returns null because only the moreThanOne set is searched for an SSD replica to delete, as sketched below.
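>
> To make the failure mode concrete, here is a minimal, self-contained sketch (illustrative names only, not the actual Hadoop code): a simplified chooseReplicaToDelete that only scans the moreThanOne set returns null, and a simplified adjustSetsWithChosenReplica that dereferences the chosen replica without a null check then throws the NullPointerException shown in the stack trace above.
> {code:java}
> import java.util.*;
>
> class ExcessReplicaNpeSketch {
>     enum StorageType { SSD, DISK }
>
>     static class Replica {
>         final StorageType type;
>         final String rack;
>         Replica(StorageType type, String rack) { this.type = type; this.rack = rack; }
>     }
>
>     // Simplified stand-in for BlockPlacementPolicyDefault.chooseReplicaToDelete:
>     // only the moreThanOne set is searched for a replica of the excess storage
>     // type, so when the excess SSD replicas sit in exactlyOne nothing is found.
>     static Replica chooseReplicaToDelete(Collection<Replica> moreThanOne,
>                                          StorageType excessType) {
>         for (Replica r : moreThanOne) {
>             if (r.type == excessType) {
>                 return r;
>             }
>         }
>         return null; // no match in moreThanOne -> caller receives null
>     }
>
>     // Simplified stand-in for BlockPlacementPolicy.adjustSetsWithChosenReplica:
>     // it dereferences the chosen replica without a null check, mirroring the
>     // NullPointerException in the stack trace above.
>     static void adjustSetsWithChosenReplica(Replica chosen,
>                                             Collection<Replica> exactlyOne,
>                                             Collection<Replica> moreThanOne) {
>         if (!exactlyOne.remove(chosen)) {
>             moreThanOne.remove(chosen);
>         }
>         System.out.println("removing excess replica on rack " + chosen.rack); // NPE if chosen == null
>     }
>
>     public static void main(String[] args) {
>         // One_SSD block with excess replicas: 2 SSD replicas on different racks
>         // (exactlyOne) and 2 DISK replicas on the same rack (moreThanOne).
>         List<Replica> exactlyOne = new ArrayList<>(Arrays.asList(
>                 new Replica(StorageType.SSD, "rackA"),
>                 new Replica(StorageType.SSD, "rackB")));
>         List<Replica> moreThanOne = new ArrayList<>(Arrays.asList(
>                 new Replica(StorageType.DISK, "rackC"),
>                 new Replica(StorageType.DISK, "rackC")));
>
>         // The excess storage type is SSD, but only moreThanOne is searched.
>         Replica chosen = chooseReplicaToDelete(moreThanOne, StorageType.SSD); // null
>         adjustSetsWithChosenReplica(chosen, exactlyOne, moreThanOne);         // throws NPE
>     }
> }
> {code}
> Running this sketch throws a NullPointerException from the simplified adjustSetsWithChosenReplica, which matches the ReplicationMonitor failure logged above.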


