Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/08/30 12:32:21 UTC
[jira] [Created] (HADOOP-13560) make sure S3 blob >5GB file copies work, with metadata
Steve Loughran created HADOOP-13560:
---------------------------------------
Summary: make sure S3 blob >5GB file copies work, with metadata
Key: HADOOP-13560
URL: https://issues.apache.org/jira/browse/HADOOP-13560
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor
An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights that metadata isn't copied on large copies.
1. Add a test that performs the large copy/rename and verify that the copy really works.
2. Verify that the metadata makes it over.
Verifying large file rename is important in its own right, as it is needed for very large commit operations by committers that use rename.
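For context: S3 caps a single server-side CopyObject call at 5 GB, so larger objects must be copied via multipart (UploadPartCopy), which is the path where the SDK issue above can drop user metadata. A minimal sketch of the two pieces a fix/test would exercise, the size threshold and an explicit metadata clone (hypothetical helper names, not S3A's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class S3CopyPlanner {
    // S3's documented limit for a single server-side CopyObject: 5 GiB.
    static final long SINGLE_COPY_LIMIT = 5L * 1024 * 1024 * 1024;

    /** True when the object is too large for one CopyObject call and
     *  must be copied with multipart UploadPartCopy requests. */
    static boolean requiresMultipartCopy(long objectSizeBytes) {
        return objectSizeBytes > SINGLE_COPY_LIMIT;
    }

    /** Build the user metadata to set explicitly on the destination,
     *  rather than trusting the SDK to carry it over on a multipart copy. */
    static Map<String, String> metadataForCopy(Map<String, String> sourceUserMetadata) {
        return new HashMap<>(sourceUserMetadata);
    }
}
```

A test along the lines suggested above would copy an object just over the threshold, then assert both that the destination exists with the right length and that `metadataForCopy`-style headers survived.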
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org