Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2017/02/10 12:52:42 UTC

[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

    [ https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861215#comment-15861215 ] 

Steve Loughran commented on HADOOP-14071:
-----------------------------------------


What's happened is that the HTTP connection failed (is this long-haul?), the AWS SDK tried to rewind the request stream so it could resend the part (using mark/reset), and the buffer couldn't go back that far.
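
Here's a minimal, stand-alone sketch of the JDK behaviour underneath this (class name invented, nothing S3A-specific): BufferedInputStream.mark(readLimit) only guarantees replay of up to readLimit bytes, so once more than that has gone over the wire, reset() fails.

{code}
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

// Demo of the underlying JDK behaviour, not S3A code.
public class MarkResetDemo {
  public static void main(String[] args) throws IOException {
    BufferedInputStream in = new BufferedInputStream(
        new ByteArrayInputStream(new byte[1024]), 16);  // 16-byte buffer
    in.mark(16);            // promise to replay at most 16 bytes
    in.read(new byte[64]);  // read well past that limit: mark is invalidated
    in.reset();             // java.io.IOException: Resetting to invalid mark
  }
}
{code}

That's the "Resetting to invalid mark" at the bottom of the stack. The setReadLimit(int) hint in the SDK's message just raises that replay buffer, which for ~100MB parts would mean buffering that much per upload; not an attractive fix.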


We've seen this before; I thought I'd eliminated it by not buffering the input stream, but instead sending the file input stream up direct, as a file-backed request can be rewound as far as needed. I'll review that code.
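
In SDK terms the distinction is roughly this (a sketch, not the actual S3ABlockOutputStream code; method and parameter names are invented):

{code}
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;

// Sketch only: two ways of handing a part to the v1 AWS SDK. A file-backed
// request can be reopened and rewound on retry; a stream-backed one relies
// on mark/reset, which is bounded by the read limit.
public class PartRequests {

  static UploadPartRequest fileBacked(String bucket, String key,
      String uploadId, int partNumber, File blockFile) {
    return new UploadPartRequest()
        .withBucketName(bucket).withKey(key)
        .withUploadId(uploadId).withPartNumber(partNumber)
        .withFile(blockFile)                // SDK can always rewind to byte 0
        .withPartSize(blockFile.length());
  }

  static UploadPartRequest streamBacked(String bucket, String key,
      String uploadId, int partNumber, File blockFile)
      throws FileNotFoundException {
    return new UploadPartRequest()
        .withBucketName(bucket).withKey(key)
        .withUploadId(uploadId).withPartNumber(partNumber)
        .withInputStream(new FileInputStream(blockFile))  // rewind bounded
        .withPartSize(blockFile.length());                // by mark/reset
  }
}
{code}

If the stream-backed path is still being taken somewhere, that would explain the stack trace in the report.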

It may be we need to address this differently: recognise the specific exception and retry sending the block ourselves. That is, we implement the retry logic rather than leaving it to the SDK.
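
Something like this, purely as a sketch of the idea (class and parameter names invented; real code would also want backoff and an abort path):

{code}
import com.amazonaws.ResetException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;
import java.util.function.Supplier;

// Sketch of the proposed retry logic, not an actual patch. A fresh
// UploadPartRequest is built for every attempt so a half-consumed
// stream is never resent.
public class RetryingPartUploader {
  private final AmazonS3 s3;
  private final int maxRetries;

  RetryingPartUploader(AmazonS3 s3, int maxRetries) {
    this.s3 = s3;
    this.maxRetries = maxRetries;
  }

  UploadPartResult upload(Supplier<UploadPartRequest> requestFactory) {
    ResetException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return s3.uploadPart(requestFactory.get());
      } catch (ResetException e) {
        last = e;  // SDK couldn't rewind; we still have the block on disk,
      }            // so loop round and resend it from scratch
    }
    throw last;    // retries exhausted: surface the last failure
  }
}
{code}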

> S3a: Failed to reset the request input stream
> ---------------------------------------------
>
>                 Key: HADOOP-14071
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14071
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.0.0-alpha2
>            Reporter: Seth Fitzsimmons
>
> When using the patch from HADOOP-14028, I fairly consistently get {{Failed to reset the request input stream}} exceptions. They're more likely to occur the larger the file being written (70 GB in the extreme case; it has to be a single file).
> {code}
> 2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block FileBlock{index=416, destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, state=Upload, dataSize=11591473, limit=104857600}
> 2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, blockUploadsCompleted=416, blockUploadsFailed=3, bytesPendingUpload=209747761, bytesUploaded=43317747712, blocksAllocated=416, blocksReleased=416, blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=1389936 ms, queueDuration=519 ms, averageQueueTime=1 ms, totalUploadDuration=1390455 ms, effectiveBandwidth=3.1153649497466657E7 bytes/s}
> Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: Multi-part upload with id 'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--' to 2017/planet-170206.orc on 2017/planet-170206.orc: com.amazonaws.ResetException: Failed to reset the request input stream; If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int): Failed to reset the request input stream; If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
> at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
> at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
> at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
> at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
> at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
> at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
> at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
> Caused by: com.amazonaws.ResetException: Failed to reset the request input stream; If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
> at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
> at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3026)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1114)
> at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:501)
> at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:492)
> at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Resetting to invalid mark
> at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
> at com.amazonaws.internal.SdkBufferedInputStream.reset(SdkBufferedInputStream.java:106)
> at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
> at com.amazonaws.event.ProgressInputStream.reset(ProgressInputStream.java:169)
> at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:102)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1219)
> ... 20 more
> {code}
> Potentially relevant: https://github.com/aws/aws-sdk-java/issues/427



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org