Posted to hdfs-dev@hadoop.apache.org by "Ryan Wu (Jira)" <ji...@apache.org> on 2019/11/13 12:03:00 UTC

[jira] [Created] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

Ryan Wu created HDFS-14986:
------------------------------

             Summary: ReplicaCachingGetSpaceUsed throws ConcurrentModificationException
                 Key: HDFS-14986
                 URL: https://issues.apache.org/jira/browse/HDFS-14986
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Ryan Wu
            Assignee: Ryan Wu


Running du across lots of disks is very expensive. We applied the patch from HDFS-14313 to get the used space from the ReplicaInfo objects in memory. However, the new refresh threads throw the exception below:
{code:java}
2019-11-08 18:07:13,858 ERROR [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992-10.208.50.21-1450855658517] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: ReplicaCachingGetSpaceUsed refresh error
java.util.ConcurrentModificationException: Tree has been modified outside of iterator
    at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
    at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
    at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
    at java.util.HashSet.<init>(HashSet.java:120)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
    at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
    at java.lang.Thread.run(Thread.java:748)
{code}


