Posted to commits@cassandra.apache.org by "Joshua McKenzie (JIRA)" <ji...@apache.org> on 2016/01/04 18:23:39 UTC

[jira] [Commented] (CASSANDRA-10957) Verify disk is readable on FileNotFound Exceptions

    [ https://issues.apache.org/jira/browse/CASSANDRA-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081406#comment-15081406 ] 

Joshua McKenzie commented on CASSANDRA-10957:
---------------------------------------------

An alternative (which would be more cross-platform friendly) might be to attempt to write a file to the temp directory and read it back to confirm the disk is working when we hit this path in the stability inspector.
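
For illustration, a minimal sketch of that write-and-read probe; the class and method names below are hypothetical, not existing Cassandra code:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public final class TempDiskProbe
{
    /**
     * Writes a small file to java.io.tmpdir and reads it back.
     * Returns false on any I/O failure or if the bytes do not match.
     */
    public static boolean tempDiskIsWorking()
    {
        byte[] payload = "disk-probe".getBytes(StandardCharsets.UTF_8);
        try
        {
            Path probe = Files.createTempFile("disk-probe", ".tmp");
            try
            {
                Files.write(probe, payload);
                return Arrays.equals(Files.readAllBytes(probe), payload);
            }
            finally
            {
                Files.deleteIfExists(probe);
            }
        }
        catch (IOException e)
        {
            return false;
        }
    }
}
{code}

The stability inspector could call something like this before deciding that a FileNotFoundException really indicates a failing or exhausted disk rather than a genuinely missing file.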

> Verify disk is readable on FileNotFound Exceptions
> --------------------------------------------------
>
>                 Key: CASSANDRA-10957
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10957
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: T Jake Luciani
>            Priority: Minor
>
> In JVMStabilityInspector we only mark ourselves unstable when a FileNotFoundException carries certain specific messages.
> {code}
>         // Check for file handle exhaustion
>         if (t instanceof FileNotFoundException || t instanceof SocketException)
>             if (t.getMessage().contains("Too many open files"))
>                 isUnstable = true;
> {code}
> It seems the OS might hit the same too-many-open-files condition but instead return "No such file or directory".
> It might make more sense, when we check this exception type, to try to read a known-to-exist file to verify the disk is readable.
> This would mean creating a hidden file on each data disk at startup? Other ideas?
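
A minimal sketch of the marker-file idea quoted above, assuming a hidden marker is written into each data directory at startup; the class, method, and file names here are hypothetical, not existing Cassandra code:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class DataDiskProbe
{
    private static final String MARKER = ".disk_probe";

    /** Called once at startup to drop a hidden marker file into each data directory. */
    public static void createMarkers(Iterable<String> dataDirectories) throws IOException
    {
        for (String dir : dataDirectories)
        {
            Path marker = Paths.get(dir, MARKER);
            if (!Files.exists(marker))
                Files.write(marker, new byte[]{ 1 });
        }
    }

    /** Returns true only if every marker file can still be read back. */
    public static boolean allDataDisksReadable(Iterable<String> dataDirectories)
    {
        for (String dir : dataDirectories)
        {
            try
            {
                Files.readAllBytes(Paths.get(dir, MARKER));
            }
            catch (IOException e)
            {
                return false; // any read failure is treated as an unreadable disk
            }
        }
        return true;
    }
}
{code}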


