Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/06/13 15:13:21 UTC

[jira] [Commented] (HADOOP-13262) set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem

    [ https://issues.apache.org/jira/browse/HADOOP-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327568#comment-15327568 ] 

Steve Loughran commented on HADOOP-13262:
-----------------------------------------

Stack trace of an example failure:
{code}
Running org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 18.818 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool
testRegularMultiPartUpload(org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool)  Time elapsed: 7.491 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSClientIOException: saving output on tests3a/1a868efc-3a49-4407-9b36-9265743b5db6: com.amazonaws.AmazonClientException: Unable to complete multi-part upload. Individual part upload failed : The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4): Unable to complete multi-part upload. Individual part upload failed : The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:84)
	at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:123)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
	at org.apache.hadoop.fs.contract.ContractTestUtils.generateTestFile(ContractTestUtils.java:864)
	at org.apache.hadoop.fs.contract.ContractTestUtils.createAndVerifyFile(ContractTestUtils.java:892)
	at org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool.testRegularMultiPartUpload(TestS3ABlockingThreadPool.java:68)
Caused by: com.amazonaws.AmazonClientException: Unable to complete multi-part upload. Individual part upload failed : The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4)
	at com.amazonaws.services.s3.transfer.internal.CompleteMultipartUpload.collectPartETags(CompleteMultipartUpload.java:122)
	at com.amazonaws.services.s3.transfer.internal.CompleteMultipartUpload.call(CompleteMultipartUpload.java:85)
	at com.amazonaws.services.s3.transfer.internal.CompleteMultipartUpload.call(CompleteMultipartUpload.java:38)
	at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4)
	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
	at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
	at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
	at com.amazonaws.services.s3.transfer.internal.UploadPartCallable.call(UploadPartCallable.java:33)
	at com.amazonaws.services.s3.transfer.internal.UploadPartCallable.call(UploadPartCallable.java:23)
	at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

testFastMultiPartUpload(org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool)  Time elapsed: 11.249 sec  <<< ERROR!
java.io.FileNotFoundException: Multi-part upload with id 'jFO63Jn9nnLWYp17xOMOXlZE6A3kBHLNfRydOFYkd1TJKESP7ZgLCE4OPWhV2rluUdKysiC4XsnxFxYfMmXIqg--' on tests3a/5f2bd5c5-5482-4e57-8836-8ec228e87a61: com.amazonaws.services.s3.model.AmazonS3Exception: The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ADFB4ECB8AA92906), S3 Extended Request ID: IPOATzKHoEoWXlgogqfM3PB9x8m8TNwlqywNjE1f8JvPNn6RdQqxoxzhFTa5fTAbk4M3ef7XEGw=
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:106)
	at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:141)
	at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.waitForAllPartUploads(S3AFastOutputStream.java:365)
	at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.access$100(S3AFastOutputStream.java:319)
	at org.apache.hadoop.fs.s3a.S3AFastOutputStream.close(S3AFastOutputStream.java:254)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
{code}
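
For reference, the purge settings applied by {{S3ATestUtils.createTestFileSystem}} and the proposed change are roughly sketched below. This is an illustration only, written against the {{fs.s3a.multipart.purge}} and {{fs.s3a.multipart.purge.age}} configuration keys; the helper name is hypothetical and the exact constants in the real source/patch may differ. The issue description quoted below explains the reasoning.

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch of the purge settings in S3ATestUtils.createTestFileSystem();
// exact constant names in the real source may differ.
static Configuration applyPurgeSettings(Configuration conf) {
  // purge pre-existing multipart uploads left behind by earlier test runs
  conf.setBoolean("fs.s3a.multipart.purge", true);

  // current behaviour: an age of 0 aborts *every* outstanding upload,
  // including uploads belonging to concurrent (parallel) test runs,
  // which produces the NoSuchUpload failures in the stack above
  // conf.setLong("fs.s3a.multipart.purge.age", 0);

  // proposed: only purge uploads older than 5 minutes, so stale uploads
  // from old runs are cleaned up while in-flight parallel uploads survive
  conf.setLong("fs.s3a.multipart.purge.age", 5 * 60);
  return conf;
}
{code}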


> set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-13262
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13262
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, test
>    Affects Versions: 2.8.0
>         Environment: parallel test runs
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>
> HADOOP-13139 patch 003 test runs show that the multipart tests are failing on parallel runs. The cause is that the FS init logic in {{S3ATestUtils.createTestFileSystem}} sets the multipart purge expiry to 0, so any in-progress multipart uploads will fail.
> Setting a 5 minute expiry will clean up leftovers from old runs without breaking anything in progress.


