Posted to common-issues@hadoop.apache.org by "Pranay Singh (JIRA)" <ji...@apache.org> on 2018/11/13 19:40:00 UTC
[jira] [Created] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment
Pranay Singh created HADOOP-15928:
-------------------------------------
Summary: Excessive error logging when using HDFS in S3 environment
Key: HADOOP-15928
URL: https://issues.apache.org/jira/browse/HADOOP-15928
Project: Hadoop Common
Issue Type: Improvement
Reporter: Pranay Singh
Problem:
------------
There is excessive error logging when Impala uses HDFS in an S3 environment. The issue was introduced by HADOOP-14603, "S3A input stream to support ByteBufferReadable".
The excessive logging results in IMPALA-5256, "ERROR log files can get very large": the error log files grow huge because the same message is logged over and over.
The following message is printed repeatedly in the error log:
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
Root cause
----------------
After investigating the issue, it appears that the exception above is logged because,
when a file is opened, hdfsOpenFileImpl() calls readDirect(), which hits this
exception.
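To illustrate the root cause, here is a hypothetical, much-simplified mirror of the Hadoop pattern involved: the wrapper's read(ByteBuffer) delegates to the underlying stream only when that stream implements ByteBufferReadable, and throws otherwise. All class names below except the ByteBufferReadable concept are illustrative stand-ins, not actual Hadoop classes.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Stand-in for Hadoop's ByteBufferReadable capability interface.
interface BufferReadable {
    int read(ByteBuffer buf) throws IOException;
}

// Stand-in for an input stream with no byte-buffer support, like the
// S3A input stream before HADOOP-14603 added ByteBufferReadable.
class WrappedStream {
}

// Stand-in for the stream with byte-buffer support.
class BufferStream extends WrappedStream implements BufferReadable {
    public int read(ByteBuffer buf) {
        return 0; // pretend nothing was read
    }
}

// Stand-in for FSDataInputStream: delegates read(ByteBuffer) only when
// the wrapped stream declares the capability, otherwise it throws.
class DataStreamWrapper {
    private final WrappedStream in;

    DataStreamWrapper(WrappedStream in) {
        this.in = in;
    }

    int read(ByteBuffer buf) throws IOException {
        if (in instanceof BufferReadable) {
            return ((BufferReadable) in).read(buf);
        }
        // This is the exception seen repeatedly in the Impala error log.
        throw new UnsupportedOperationException(
            "Byte-buffer read unsupported by input stream");
    }
}
```

The key point is that the throw is a capability signal, not a failure of the read itself, which is why logging it at ERROR level on every open is misleading.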
Fix:
----
Since the HDFS client does not explicitly initiate the byte-buffer read (it happens implicitly when the file is opened), we should not generate an error log entry during the open of a file.
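One way the fix could look, sketched below under stated assumptions: probe byte-buffer read support once at open time, remember the result, and fall back to byte[] reads silently when the probe fails. The ReadableFile interface and DirectReadProbe class are hypothetical names for illustration, not Hadoop or libhdfs APIs.

```java
import java.nio.ByteBuffer;

// Hypothetical capability interface for a stream that may or may not
// support reading directly into a ByteBuffer.
interface ReadableFile {
    int read(ByteBuffer buf);
}

class DirectReadProbe {
    // Probe once at open time. A failed probe is the expected outcome for
    // streams without byte-buffer support (e.g. S3A before HADOOP-14603),
    // so it is treated as a capability result and nothing is logged.
    static boolean supportsDirectRead(ReadableFile f) {
        try {
            f.read(ByteBuffer.allocate(0)); // zero-byte probe read
            return true;
        } catch (UnsupportedOperationException e) {
            // Not an error: caller falls back to byte[] reads.
            return false;
        }
    }
}
```

With the result cached per open file, the client can route subsequent reads through the supported path without ever hitting, or logging, the exception again.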
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)