Posted to hdfs-dev@hadoop.apache.org by "Xu Chen (JIRA)" <ji...@apache.org> on 2015/08/27 07:56:46 UTC
[jira] [Created] (HDFS-8972) EINVAL Invalid argument when RAM_DISK usage 90%+
Xu Chen created HDFS-8972:
-----------------------------
Summary: EINVAL Invalid argument when RAM_DISK usage 90%+
Key: HDFS-8972
URL: https://issues.apache.org/jira/browse/HDFS-8972
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Xu Chen
Priority: Critical
On a directory that uses the LAZY_PERSIST storage policy, when "df" shows the backing tmpfs at 90% usage or higher, running a Spark, Hive, or MapReduce application causes the DataNode to log the following exception:
{code}
2015-08-26 17:37:34,123 WARN org.apache.hadoop.io.ReadaheadPool: Failed readahead on null
EINVAL: Invalid argument
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}
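For context, the stack trace ends in the native {{posix_fadvise(2)}} call. Unlike most POSIX functions, {{posix_fadvise}} does not set {{errno}}; it returns the error number directly, which is how the bare {{EINVAL}} above surfaces through {{NativeIO}}. A minimal C sketch (an illustration, not Hadoop code) showing that convention against a throwaway file:

{code}
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Create a temp file and issue a readahead hint, the same advice the
 * DataNode's ReadaheadPool uses. Returns the posix_fadvise result. */
int run_demo(void) {
    char path[] = "/tmp/fadvise-XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) { perror("mkstemp"); return -1; }
    unlink(path);                      /* file vanishes when fd closes */
    write(fd, "x", 1);                 /* make the file non-empty */

    /* Return value IS the error code (0 on success); errno is untouched. */
    int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
    if (rc != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));

    close(fd);
    return rc;
}

int main(void) {
    return run_demo() == 0 ? 0 : 1;
}
{code}

On a regular filesystem this succeeds; the report suggests the same advice fails with EINVAL once the block file lives on a nearly full RAM_DISK (tmpfs).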
The application also runs about 25% slower than when the exception does not occur.
Regards
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)