Posted to common-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2021/02/09 04:26:00 UTC

[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

     [ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=550038&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-550038 ]

ASF GitHub Bot logged work on HADOOP-17338:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Feb/21 04:25
            Start Date: 09/Feb/21 04:25
    Worklog Time Spent: 10m 
      Work Description: yzhangal opened a new pull request #2692:
URL: https://github.com/apache/hadoop/pull/2692


   …ontent-Length delimited message body etc
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and set the title of the pull request so that it starts with
   the corresponding JIRA issue number (e.g. HADOOP-XXXXX. Fix a typo in YYY.)
   For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 550038)
    Time Spent: 4.5h  (was: 4h 20m)

> Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
> ------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17338
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17338
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.1
>
>         Attachments: HADOOP-17338.001.patch
>
>          Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using S3AInputStream:
> 1.
> {code:java}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 156463674; received: 150001089)
> at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code:java}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> at com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> at com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
> at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2361)
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2493)
> at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
> at cascading.tap.hadoop.io.CombineFileRecordReaderWrapper.next(CombineFileRecordReaderWrapper.java:70)
> at org.apache.hadoop.mapred.lib.CombineFileRecordReader.next(CombineFileRecordReader.java:58)
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
> ... 10 more
> {code}
> Inspired by
>  [https://stackoverflow.com/questions/9952815/s3-java-client-fails-a-lot-with-premature-end-of-content-length-delimited-messa]
>  and
>  [https://forums.aws.amazon.com/thread.jspa?threadID=83326], we arrived at a solution that has helped us, and we would like to contribute the fix to the community version.
> The problem is that S3AInputStream held only a short-lived S3Object, which it used to create the wrappedStream; the S3Object could be garbage collected at a random time, which closed the stream and produced the symptoms reported above.
> [https://github.com/aws/aws-sdk-java/blob/1.11.295/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/S3Object.java#L225] is the S3 SDK code that closes the stream when the S3Object is garbage collected.
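> To illustrate the mechanism, here is a minimal, self-contained Java sketch (the class and method names are hypothetical, not the actual SDK source): once the only strong reference to the owning object is dropped, the GC may finalize it and close the stream out from under a reader that still holds the stream itself:
> {code:java}
> import java.io.ByteArrayInputStream;
> import java.io.InputStream;
>
> // Illustrative stand-in for an S3Object-like holder.
> class ObjectHolder {
>     private final InputStream content =
>         new ByteArrayInputStream(new byte[16]);
>
>     InputStream getContent() {
>         return content;
>     }
>
>     @Override
>     protected void finalize() throws Throwable {
>         // Runs once the holder becomes unreachable, even while the stream
>         // is still reachable through getContent()'s return value.
>         content.close();
>         super.finalize();
>     }
> }
>
> public class GcClosesStream {
>     public static void main(String[] args) throws Exception {
>         InputStream in = new ObjectHolder().getContent();
>         // The holder is now unreachable and eligible for finalization.
>         System.gc();
>         Thread.sleep(100);
>         // On a real network-backed stream, this read could now fail with
>         // "Premature end of Content-Length delimited message body" or an
>         // SSL error, because the connection was closed mid-read.
>         in.read();
>     }
> }
> {code}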
> Here is the code in S3AInputStream that creates the temporary S3Object and uses it to create the wrappedStream:
> {code:java}
>     S3Object object = Invoker.once(text, uri,
>         () -> client.getObject(request));
>     changeTracker.processResponse(object, operation,
>         targetPos);
>     wrappedStream = object.getObjectContent();
> {code}
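> The fix that has helped us keeps the S3Object strongly reachable for the lifetime of the stream, so the garbage collector cannot close the connection while reads are in flight. Below is a hedged sketch of that idea; the wrapper class and its names are hypothetical, not the exact change in the pull request:
> {code:java}
> import java.io.IOException;
> import java.io.InputStream;
>
> // Hypothetical wrapper: pins the owning object alongside its stream.
> class PinnedObjectInputStream extends InputStream {
>     // Strong reference to the owner (e.g. the S3Object); as long as this
>     // wrapper is reachable, the owner cannot be garbage collected.
>     private final Object owner;
>     private final InputStream wrapped; // e.g. object.getObjectContent()
>
>     PinnedObjectInputStream(Object owner, InputStream wrapped) {
>         this.owner = owner;
>         this.wrapped = wrapped;
>     }
>
>     @Override
>     public int read() throws IOException {
>         return wrapped.read();
>     }
>
>     @Override
>     public void close() throws IOException {
>         wrapped.close();
>     }
> }
> {code}
> With this approach, wrappedStream would be built as new PinnedObjectInputStream(object, object.getObjectContent()), so the S3Object lives at least as long as the stream that reads from it. Equivalently, S3AInputStream could simply store the S3Object in a field next to wrappedStream.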



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org