Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/07/27 14:55:00 UTC

[jira] [Updated] (HDFS-16698) Add a metric to sense possible MaxDirectoryItemsExceededException in time.

     [ https://issues.apache.org/jira/browse/HDFS-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-16698:
----------------------------------
    Labels: pull-request-available  (was: )

> Add a metric to sense possible MaxDirectoryItemsExceededException in time.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-16698
>                 URL: https://issues.apache.org/jira/browse/HDFS-16698
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In our prod environment, we occasionally encounter job failures caused by MaxDirectoryItemsExceededException.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /user/XXX/.sparkStaging is exceeded: limit=1048576 items=1048576
> {code}
> To avoid this, we add a metric that detects directories approaching the item limit, so a possible MaxDirectoryItemsExceededException can be sensed and handled in time, before jobs fail.
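
A minimal sketch of the idea behind such a metric, assuming the default value of `dfs.namenode.fs-limits.max-directory-items` (1048576). The class name, method name, and warning ratio here are hypothetical illustrations, not code from the HDFS-16698 patch:

```java
// Hypothetical sketch: flag a directory whose child count is approaching
// dfs.namenode.fs-limits.max-directory-items (default 1048576), so a gauge
// or alert can fire before MaxDirectoryItemsExceededException is thrown.
// Names and the threshold convention are illustrative only.
public class DirectoryItemLimitMetric {

    // Default of dfs.namenode.fs-limits.max-directory-items.
    static final long MAX_DIRECTORY_ITEMS = 1048576;

    // Returns true once childCount crosses warnRatio * limit,
    // e.g. warnRatio = 0.9 means "90% of the limit".
    static boolean nearLimit(long childCount, double warnRatio) {
        return childCount >= (long) (MAX_DIRECTORY_ITEMS * warnRatio);
    }

    public static void main(String[] args) {
        // 90% of 1048576 is 943718 (rounded down), so:
        System.out.println(nearLimit(500_000, 0.9));   // well below the threshold
        System.out.println(nearLimit(1_000_000, 0.9)); // above the threshold
    }
}
```

In practice the NameNode would update such a gauge as directory entries are added, letting operators clean up or re-shard a hot directory (like the `.sparkStaging` path above) before the hard limit is hit.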



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org