Posted to common-issues@hadoop.apache.org by "Kihwal Lee (JIRA)" <ji...@apache.org> on 2018/05/04 18:00:00 UTC

[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

    [ https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464212#comment-16464212 ] 

Kihwal Lee commented on HADOOP-13738:
-------------------------------------

We are seeing issues in 2.8 with this change:
- When space is low, the OS returns ENOSPC. Instead of simply stopping writes, the drive is marked bad and re-replication kicks in, which makes a cluster-wide space problem even worse. If the number of "failed" drives exceeds the DFIP limit, the datanode shuts down. (A rough sketch of tolerating ENOSPC in the check follows after this list.)
- There are non-HDFS users of DiskChecker that call it proactively, not just after failures. That was fine before, but it now incurs heavy I/O due to the introduction of fsync() in the check.
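A minimal, purely illustrative sketch of the ENOSPC point, assuming a hypothetical helper (this is not the actual DiskChecker code; the class/method names and the errno-to-message mapping are assumptions):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Hypothetical checker that performs the probe write but does not
    // treat a full disk (ENOSPC) as a failed volume.
    class TolerantDiskCheck {
      static void checkDirWithDiskIo(File dir) throws IOException {
        File probe = File.createTempFile("disk-check-", ".tmp", dir);
        try (FileOutputStream out = new FileOutputStream(probe)) {
          out.write(new byte[512]);   // small probe write
          out.getFD().sync();         // force it to the device
        } catch (IOException e) {
          // Java does not expose errno directly; ENOSPC usually surfaces
          // with this libc message. A full disk is not a failed disk.
          String msg = e.getMessage();
          if (msg != null && msg.contains("No space left on device")) {
            return;
          }
          throw e;                    // genuine I/O failure
        } finally {
          probe.delete();
        }
      }
    }

Whether matching on the exception message is a reliable way to detect ENOSPC is itself debatable, but it illustrates the distinction being asked for: out of space should not count toward the failed-volume limit.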

> DiskChecker should perform some disk IO
> ---------------------------------------
>
>                 Key: HADOOP-13738
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13738
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>            Priority: Major
>             Fix For: 2.9.0, 3.0.0-alpha2, 2.8.4
>
>         Attachments: HADOOP-13738-branch-2.8-06.patch, HADOOP-13738.01.patch, HADOOP-13738.02.patch, HADOOP-13738.03.patch, HADOOP-13738.04.patch, HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. We have seen this in real clusters. DiskChecker performs simple permissions-based checks on directories, which do not guarantee that any disk IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
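
For what the proposed check might look like in practice, here is a sketch under assumptions (not the committed patch; the class, method name, and file naming are made up, and NIO is used only for brevity):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    class DiskIoProbe {
      // Illustrative probe: write a little data into the directory and
      // force it to the device, so a dead disk/controller fails the check
      // promptly instead of passing a permissions-only test.
      static void probeDiskIo(Path dir) throws IOException {
        Path probe = Files.createTempFile(dir, "DiskChecker-", ".probe");
        try (FileChannel ch = FileChannel.open(probe, StandardOpenOption.WRITE)) {
          ch.write(ByteBuffer.wrap(
              "hadoop-disk-check".getBytes(StandardCharsets.UTF_8)));
          ch.force(true);   // flush data and metadata to the device
        } finally {
          Files.deleteIfExists(probe);
        }
      }
    }

The force(true) call is the part that actually exercises the disk; it is also the source of the extra I/O cost raised in the comment above when the check is run proactively.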


