Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2023/02/06 03:34:00 UTC
[jira] [Commented] (HDFS-16909) Move the null-check statement out of the for loop in the ReplicaMap#mergeAll method.
[ https://issues.apache.org/jira/browse/HDFS-16909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17684404#comment-17684404 ]
ASF GitHub Bot commented on HDFS-16909:
---------------------------------------
hfutatzhanghb opened a new pull request, #5353:
URL: https://github.com/apache/hadoop/pull/5353
Currently, the code is as below:
```java
for (ReplicaInfo replicaInfo : replicaSet) {
  checkBlock(replicaInfo);
  if (curSet == null) {
    // Add an entry for block pool if it does not exist already
    curSet = new LightWeightResizableGSet<>();
    map.put(bp, curSet);
  }
  curSet.put(replicaInfo);
}
```
The statement:
```java
if (curSet == null)
```
should be moved out of the for loop, since `curSet` can only be null before the first entry is added; re-checking it on every iteration is unnecessary.
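A minimal sketch of the proposed refactor, with the null check hoisted before the loop. This uses plain `java.util` collections in place of HDFS's `LightWeightResizableGSet`, stands in `String` for `ReplicaInfo`, and omits `checkBlock`; the class and method names here are illustrative, not the actual HDFS code.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MergeSketch {
    // Block-pool id -> set of replicas, mirroring ReplicaMap's map field.
    static Map<String, Set<String>> map = new HashMap<>();

    static void mergeAll(String bp, Set<String> replicaSet) {
        Set<String> curSet = map.get(bp);
        // Hoisted out of the loop: curSet is null only before the first
        // insertion for this block pool, so one check up front suffices.
        if (curSet == null) {
            curSet = new HashSet<>();
            map.put(bp, curSet);
        }
        for (String replicaInfo : replicaSet) {
            curSet.add(replicaInfo);
        }
    }
}
```

One subtlety worth noting: hoisting the check creates the block-pool entry even when `replicaSet` is empty, whereas the original only creates it on the first insertion; whether that difference matters depends on how callers treat empty entries.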
> Move the null-check statement out of the for loop in the ReplicaMap#mergeAll method.
> ---------------------------------------------------------------------------
>
> Key: HDFS-16909
> URL: https://issues.apache.org/jira/browse/HDFS-16909
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 3.3.4
> Reporter: ZhangHB
> Priority: Minor
>
> Currently, the code is as below:
> {code:java}
> for (ReplicaInfo replicaInfo : replicaSet) {
>   checkBlock(replicaInfo);
>   if (curSet == null) {
>     // Add an entry for block pool if it does not exist already
>     curSet = new LightWeightResizableGSet<>();
>     map.put(bp, curSet);
>   }
>   curSet.put(replicaInfo);
> } {code}
> the statement:
> {code:java}
> if (curSet == null){code}
> should be moved out of the for loop.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)