Posted to hdfs-dev@hadoop.apache.org by "tomscut (Jira)" <ji...@apache.org> on 2021/08/19 03:50:00 UTC
[jira] [Created] (HDFS-16179) Update loglevel for BlockManager#chooseExcessRedundancyStriped to avoid too many logs
tomscut created HDFS-16179:
------------------------------
Summary: Update loglevel for BlockManager#chooseExcessRedundancyStriped to avoid too many logs
Key: HDFS-16179
URL: https://issues.apache.org/jira/browse/HDFS-16179
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: tomscut
Assignee: tomscut
Attachments: log-count.jpg, logs.jpg
{code:java}
private void chooseExcessRedundancyStriped(BlockCollection bc,
    final Collection<DatanodeStorageInfo> nonExcess,
    BlockInfo storedBlock,
    DatanodeDescriptor delNodeHint) {
  ...
  // cardinality of found indicates the expected number of internal blocks
  final int numOfTarget = found.cardinality();
  final BlockStoragePolicy storagePolicy = storagePolicySuite.getPolicy(
      bc.getStoragePolicyID());
  final List<StorageType> excessTypes = storagePolicy.chooseExcess(
      (short) numOfTarget, DatanodeStorageInfo.toStorageTypes(nonExcess));
  if (excessTypes.isEmpty()) {
    LOG.warn("excess types chosen for block {} among storages {} is empty",
        storedBlock, nonExcess);
    return;
  }
  ...
}
{code}
IMO, this code is simply detecting excess StorageTypes, so lowering the log level to DEBUG has no adverse effect.
We have a cluster that uses the EC policy to store data. This message is currently logged at WARN level, and in about 50 minutes it was printed 286,093 times, which can drown out other important logs.
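To illustrate the effect of the proposed change: at a typical production log configuration, a message lowered from WARN to DEBUG is simply suppressed. A minimal, self-contained sketch using java.util.logging (the real BlockManager uses an SLF4J logger, so the API differs; the class and method names below are hypothetical, with FINE standing in for DEBUG and WARNING for WARN):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical stand-in for the SLF4J logger in BlockManager.
public class ExcessRedundancyLogDemo {
  static final Logger LOG = Logger.getLogger(ExcessRedundancyLogDemo.class.getName());

  static {
    // Typical production configuration: INFO and above are emitted.
    LOG.setLevel(Level.INFO);
  }

  // Before the change, the "excess types ... is empty" message is logged
  // at WARN and is always emitted at this configuration.
  static boolean emittedBeforeChange() {
    return LOG.isLoggable(Level.WARNING);
  }

  // After the proposed change, the same message would be logged at debug
  // level (FINE here) and suppressed in production.
  static boolean emittedAfterChange() {
    return LOG.isLoggable(Level.FINE);
  }

  public static void main(String[] args) {
    System.out.println("emitted at WARN: " + emittedBeforeChange());
    System.out.println("emitted at DEBUG: " + emittedAfterChange());
  }
}
```

Operators who still need this message for debugging can re-enable it per-logger (e.g. by setting the BlockManager logger to DEBUG) without flooding the default logs.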
!logs.jpg|width=1167,height=62!
!log-count.jpg|width=760,height=30!
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org